
Re: [Xen-users] Just an update and some minor Questions



1.) SO, Casey DeLorme, your ASRock motherboard has an onboard USB3 controller. How were you able to get it working for use in your Dom0? As you explained, you were not able to pass it through to a DomU, but you left it usable in Dom0. For me, if I run the command lspci -k it shows up as the following:

03:00.0 USB controller: Etron Technology, Inc. EJ168 USB 3.0 Host Controller (rev 01)
               Subsystem: ASRock Incorporation Device 7023
               Kernel driver in use: xhci_hcd
04:00.0 USB controller: Etron Technology, Inc. EJ168 USB 3.0 Host Controller (rev 01)
               Subsystem: ASRock Incorporation Device 7023
               Kernel driver in use: xhci_hcd
05:00.0 USB controller: Etron Technology, Inc. EJ168 USB 3.0 Host Controller (rev 01)
               Subsystem: ASRock Incorporation Device 7023
               Kernel driver in use: xhci_hcd

So my Xen with kernel 3.3.4 is able to see my USB3 controllers, BUT when I plug anything into the ports, nothing happens. SO it seems like a driver is missing, or something else is wrong, and it just doesn't work. SO, I was wondering what driver or anything else you need to make Wheezy mount a device over USB 3.0? I also don't quite understand why it shows 3 controllers, as far as I know there are only 2 USB3 connectors on the mobo, where EACH can handle 2 USB3 ports. But maybe there is a 3rd one I didn't see.
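BTW, here is roughly how I have been poking at it so far, in case it helps (standard commands, and I am only guessing at what Wheezy will actually log):

    # See whether xhci_hcd said anything at boot about the Etron controllers
    dmesg | grep -i xhci
    # Plug a device into a USB3 port, then check for new kernel messages
    dmesg | tail -n 20
    # Show the USB bus topology, including which bus each controller provides
    lsusb -t
    # Confirm the xhci_hcd module is actually loaded (it may also be built in)
    lsmod | grep xhci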

2.) In a previous thread you replied to, Re: [Xen-users] gaming on multiple OS of the same machine?, you said: "Also, if you are not confident in your ability to work with *nix, I wouldn't advise it. I had spent two years tinkering with Web Servers in Debian, so I thought I would have an easy time of things." SOOO, what is *nix all about?

3.) I was reading this article, http://www.redhat.com/magazine/009jul05/features/lvm2/#fig-striped, and it says that when using LVM, the lvcreate command uses linear mapping by default. But in LVM there are two ways to do mapping: linear mapping and striped mapping (e.g. 4 physical extents per stripe). Have you ever messed with LVM striped mapping, or did you just leave it at the default? As the article further explains, "The striped mapping allows a single logical volume to nearly achieve the combined performance of two PVs and is used quite often to achieve high-bandwidth disk transfers." SO, without testing it out, I am not sure which will give better performance. Heck, if you are having some I/O issues with your setup, MAYBE this might help, but I'm not sure.
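If I am reading the article right, the two mappings would be created something like this (just a sketch, the VG/LV names and sizes are made up):

    # Linear mapping (the lvcreate default): extents fill one PV before
    # spilling over onto the next
    lvcreate -L 100G -n lv_linear vg_data

    # Striped mapping: -i is the number of stripes (at most the number of
    # PVs in the VG), -I is the stripe size in KB
    lvcreate -L 100G -i 2 -I 64 -n lv_striped vg_data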

4.) I was trying to figure out the difference between dmraid and mdadm. As far as I know, dmraid handles BIOS/firmware ("fake") RAID through device-mapper, while mdadm is the tool you use to create and monitor native Linux software RAID (md) arrays. SO, do you use mdadm at all? Or do you just use LVM2 to set up the LVM once you make the RAID5? On that note, do you even need to install or use mdadm?
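From what I have read, the usual recipe would be something like this (sketch only, device names are examples):

    # mdadm builds the RAID 5 array out of the raw disks
    mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
    # Record the array so it assembles on boot
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    # LVM then sits on top of the single md device
    pvcreate /dev/md0
    vgcreate vg_data /dev/md0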


************************************
When you create a VG you can select one or more PV's, and you give that VG a name.  When SSD's were super expensive, it was cheaper to buy several smaller SSD's and use a spanning LVM to create a 64GB SSD from 4x 16GB SSD's.  I personally prefer one PV to each VG, and I use RAID 5 for my data storage PV to create data redundancy.  Obviously spanning VG's are subject to common sense, you wouldn't combine a 5400 RPM drive with a 7200 RPM drive, or a SSD for that matter, as it would kill performance.
************************************
SO, by saying that you prefer to assign one PV to each VG being created... but if you, let's say, combine 3x 500GB HDDs using RAID5, you don't have direct control of which HDD the VG will get put on? OR do you? heheeh. I do agree that it makes a lot more sense to set up RAID or LVM on a set of identical HDDs/SSDs instead of mixing it up, but if you set up a VG on a span of 3x HDDs in RAID5 and add LVs, the data will be striped across all 3 HDDs anyway.

SO, it makes a bit more sense now how LVM works. In some ways you really only deal with VGs and LVs, so having a 3-step process to set up LVM seems like an extra step. But since I didn't invent the LVM technology, then again there has to be a way to designate or mark a partition as a PV instead of a different partition type... lol.
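As far as I can tell, the "marking" is just the partition type code plus the pvcreate step, roughly like this (sketch, device names made up):

    # Set the partition type to Linux LVM first (0x8e in fdisk for MBR,
    # 8e00 in gdisk for GPT), then the classic 3 steps:
    pvcreate /dev/sdb1          # mark the partition as a PV
    vgcreate xen /dev/sdb1      # group the PV(s) into a VG
    lvcreate -L 8G -n linux xen # carve an LV out of the VG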


********* ********* ********* ********* *********
These examples are using one physical disk.  Let's jump straight to the insanity that is Xen Virtual Machine configuration.

My System has 5 Hard Drives.  One SSD, one 500GB 5400 RPM Laptop HDD, and 3x 1TB HDD's.  I also have a single DVDRW (I know, so old-fashioned...).
********* ********* ********* ********* *********
heeh NAH, it is not old-fashioned. I still use and rely on a DVDRW at certain times. I still need to occasionally burn a CD/DVD, plus I use Acronis True Image a lot to create backup images of Windows and Linux partitions, and it still only ships on a CD. However, I am also curious whether Acronis True Image is even able to read LVM partitions so it can back up certain LVs. If not, then LVM2 has a built-in way to do snapshots, WHICH I am hoping can be turned into a single image file that I can back up to an external HDD.
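From the LVM docs, I think the snapshot-to-image-file idea would go roughly like this (sketch only; the VG/LV names follow your SSD example and the mount point is made up):

    # Snapshot the "home" LV; the size is how much change the snapshot
    # can absorb while it exists
    lvcreate --snapshot --size 2G --name home_snap /dev/xen/home
    # Copy the frozen snapshot into a compressed image on the external HDD
    dd if=/dev/xen/home_snap bs=4M | gzip > /mnt/external/home_backup.img.gz
    # Remove the snapshot when done so it stops tracking changes
    lvremove /dev/xen/home_snap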


********* ********* ********* ********* *********
The three 1TB drives in RAID 5 create a single disk, I have GPT partition table, and the entire space is setup for LVM (hence my data storage PV).

The 500GB drive is purely for dd backups, so it has one PV spanning the drive as well.
********* ********* ********* ********* *********
Ya, my setup is very similar, except I keep my 500GB laptop HDD as an external HDD and attach it via eSATA as needed.


********* ********* ********* ********* *********
My SSD is where things get really crazy, for fastest speeds I wanted to run all of my systems on it, so I have:

GPT Partition Table
256MB EFI Partition
256MB Boot Partition
Remainder to LVM (PV)
Volume Group (xen)
(LV) linux 8GB partition with Ext4 filesystem mounted to root ("/")
(LV) home 20GB Partition with Ext4 Filesystem - Mounted to "/home" for data storage (if linux dies, my data doesn't go with it)
(LV) swap 2GB assigned as SWAP
(LV) windows - This gets subpartitioned by windows
MSDos Partition Table & MBR
Windows Partition
(LV) nginx - Sub-partitioned by Debian Squeeze
msdos Partition Table & MBR (pointing to Grub)
Single Linux Partition
(LV) pfsense - Same as Debian except using FreeBSD
msdos Partition Table & MBR (pointing to Grub)
Single Unix Partition

What I am trying to show here is that each virtual machine will treat its LV like a physical drive, and you end up with a partition table and partitions inside of the LV (which FYI is basically a partition).

This is why you'll occasionally see Xen-Users posts about migrating a primary Windows system to an HVM partition, it's very difficult lol.

In conclusion, if you want four DomU's, you only need four LV's, as you can sub-partition each of them from inside the virtual machine, since it treats its LV as a physical drive.

So, create 4x 50GB Logical Volumes, assign each to a virtual machine, and when you install Linux select "Manual Partitioning" and specify 1GB for Swap.
********* ********* ********* ********* *********
humm, WOW, well that is one complicated SSD setup ehehe. SO ya, setting up only one VG and making as many LVs as needed seems like a much simpler and easier way to handle LVM instead of making more VGs.
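Just to check I am following: in the DomU config the LV gets handed over as if it were a whole disk, something like this (the LV path follows your layout, the rest of the values are my guesses):

    # Excerpt of a guest config, e.g. /etc/xen/windows.cfg
    name    = "windows"
    builder = "hvm"
    memory  = 4096
    # The LV is presented to the guest as an entire disk, so Windows
    # writes its own MBR and partitions inside it
    disk    = [ 'phy:/dev/xen/windows,hda,w' ]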


********* ********* ********* ********* *********
GPU sharing in a consumer setting would let you buy one really kickass video card, and share it between several virtual machines, at the same time.  Normal GPU's don't have this functionality, hence why we have to pass each physical device directly to the Virtual Machine.

If you understand that much, you can probably understand why I mentioned this as being much needed for the consumer market.  It would be game changing (hah get it, because we use GPU's for gaming... sorry I'm lame).
********* ********* ********* ********* *********

heheh ya, I get it, AND it would be a very gametastic feature to have... lol. I am very much stoked about GPU sharing tech working in Xen. As my video card is an ATI FirePro V7900, which is a pretty badass card by itself, the possibility of having, let's say, two VMs running off it directly while both have full graphics acceleration IS just plain AWESOME eheheh.
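In the meantime, as you say, the whole device has to go to one guest, which if I have it right looks roughly like this (the PCI address is just an example; check lspci for the real one, and I am assuming the xl toolstack):

    # Dom0: bind the device to pciback so it can be assigned
    modprobe xen-pciback
    xl pci-assignable-add 01:00.0
    # DomU config: hand the whole GPU to this one guest
    pci = [ '01:00.0' ]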


************************************
I agree that a good GUI can be a wonderful thing, but I think it's situational.  If you're anything like me you probably use keyboard shortcuts all the time, and that is "User Knowledge" not world knowledge, which means you're essentially using command line regularly already.  I happen to be exceptionally quick on a keyboard by comparison to the mouse.
************************************
ehheehhe, YAP, I totally agree with you here... lol. I have been using a keyboard since 1990, SO I have come to use certain shortcuts religiously eheh. It makes things a lot faster, as there is NOTHING I hate more than sitting here watching the PC chug along, waiting for something to finish when it is supposed to be blazing fast eheheh.

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users
