
Re: [Xen-users] Just an update and some minor Questions



1)

When I use "sudo lspci | grep -i usb" I get:
00:1a.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2 (rev 05)
00:1d.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1 (rev 05)
03:00.0 USB controller: Etron Technology, Inc. EJ168 USB 3.0 Host Controller (rev 01)
04:00.0 USB controller: Etron Technology, Inc. EJ168 USB 3.0 Host Controller (rev 01)

I have two of each type of USB controller (2.0 and 3.0).  Of the USB 2.0 PCI buses, one serves the onboard USB ports and the other serves the USB headers on the board.

Similarly, I have one pair of USB 3.0 ports on the back plus a USB 3.0 header, for a total of four USB 3.0 ports (two and two), since the board came with a PCI bracket and connector for the two extra USB 3.0 ports.

As far as using USB 3.0 in Dom0 goes, I simply installed Linux and the ports worked plug and play.  I don't believe I installed any drivers.
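If you want to double-check that Dom0 picked the controllers up on its own, something like this should show the kernel driver bound to each controller (xhci_hcd for USB 3.0, ehci_hcd for USB 2.0); the grep patterns here are just examples:

# Show each USB controller and the kernel driver in use
sudo lspci -k | grep -A 3 -i usb

# Confirm the USB 3.0 (xhci) and USB 2.0 (ehci) modules are loaded
lsmod | grep -E "xhci|ehci"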

I have used all sorts of input devices, but no external hard drives yet.  Logitech G11 gaming keyboard, a Logitech M505 wireless mouse, a Microsoft USB laser mouse, and currently two wireless Rapoo keyboards, one with a touchpad.



I was able to pass the USB 3.0 devices to Windows and install the motherboard drivers there, but they were glitchy.  On one occasion I got bluescreens after installing, and even when it "worked" the ports were not compatible with all devices.  The whole point of USB 3.0 is greater speed, but when devices like my Hauppauge HD PVR and Logitech C910 webcam didn't get any video (only audio), that kinda defeated the purpose.  Since I was only able to get various input devices working, I decided to leave the controllers to Dom0 instead; my USB 2.0 PCI buses have substantially more ports anyway.



Have you tried running "lsusb" to see if it sees the connected devices?  lspci shows you the buses, not the connected devices.  I haven't tried external HDDs in the USB 3.0 ports yet.
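For example (lsusb comes from the usbutils package on Debian/Ubuntu if it isn't already installed):

# List the controllers (PCI level)
sudo lspci | grep -i usb

# List the devices actually attached to those controllers
lsusb

# Tree view showing which bus and port each device hangs off of
lsusb -t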




2)

Linux/Unix is often referred to as *nix.  Linux is based on Unix; if you know one, you basically know the other.  You can be an experienced Unix user, say from a company that uses Unix, and adapt easily to a Linux environment, since most of the commands and so on carry over.  You may have noticed that there are a ton of different Linux distributions; the same goes for Unix, and the only reason that is workable is that the command sets don't change much between them.



3)

Yes, you can tell LVM to stripe data like RAID; this exists and works well, but it has its downsides.

Downside #1:  Without a RAID 1/10 mirror underneath, you are subject to data loss.  If you choose to use RAID 1 plus LVM striping, you end up with 4 disks (each PV has a mirror).  The benefit here is that LVM manages the distributed IO, but you probably get a bit less speed than RAID 10.  It's a tradeoff.


Downside #2:  I'm using an SSD, which I have tested at speeds in excess of 350 MB/s read from within Linux and 450 MB/s in Windows.  If I wanted to do what they suggest, I would need at least two 240GB SSDs, which is not ideal price-wise, and I don't know that it would solve the problem, because the disk IO speeds should already cover IO availability unless my logic is wrong.
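For reference, striping is just a couple of flags on lvcreate.  A rough sketch, with made-up VG/LV names and sizes:

# Stripe a 100GB LV across 2 PVs in VG "data", with a 64KB stripe size
sudo lvcreate -L 100G -i 2 -I 64 -n striped_lv data

The -i flag is the number of stripes (PVs to spread across) and -I is the stripe size in KB.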


4)

I would have to do more reading to give you a clear answer on this.

I just went with what came with the system during installation, since I set up my RAID from there.

When I type "sudo which dm" and hit tab, nothing comes up, so I may just not know the command-line tool names, but mdadm exists on my system.

When I booted the Ubuntu Live CD to back up my data, my RAID was not found, since the live environment doesn't include the tools.  I installed dmraid with apt-get and it was then able to see my RAID.

In conclusion, the fact that Ubuntu saw my RAID once dmraid was installed means that either they are both compatible utilities, or dmraid and mdadm are packaged together.  Perhaps mdadm is the command-line tool that comes with dmraid?  Again, I never bothered to research this since it hasn't really been an issue.
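If you ever want to check which tool is actually managing an array, something along these lines should tell you (the /dev/md0 name is just an example):

# mdadm-managed (native Linux software RAID) arrays show up here
cat /proc/mdstat
sudo mdadm --detail /dev/md0

# dmraid-managed (BIOS/"fake" RAID) sets show up here
sudo dmraid -r
sudo dmraid -s

For what it's worth, they are normally separate packages: dmraid drives BIOS/"fake" RAID sets through device-mapper, while mdadm manages native Linux md arrays.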





RAID sits underneath everything in the setup, by which I mean that if you create a RAID 5, it will be treated as a single disk.  LVM will not see it as three separate devices.

Remember, the benefit of RAID 5 is data redundancy at a 2:3 ratio of usable space to raw capacity.  If you prefer, you can omit RAID 5 and accomplish essentially a 3-disk RAID 0 by telling LVM to stripe data across the three, but then you have no redundancy: if one disk goes, so does all your data.
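To make that concrete, here is a rough sketch of building a 3-disk RAID 5 with mdadm and layering LVM on top (the /dev/sdb, /dev/sdc, /dev/sdd device names and the VG/LV names and size are placeholders):

# Build the array: 3 disks become one md device with 1 disk of redundancy
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd

# LVM only ever sees the single md device
sudo pvcreate /dev/md0
sudo vgcreate data /dev/md0
sudo lvcreate -L 500G -n storage data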


Again, if you are using LVM to stripe data, your major benefit is that LVM is now in charge of segregating IO.  If you have 3 disks, LVM can split IO operations between them accordingly.  This can result in increased speeds when multiple LVs in the same VG are being accessed simultaneously (hence many HVMs running).

The reason I believe it wouldn't help an SSD user is that the main reason RAID helps is that disk IO on HDDs is SLOW!  Disk IO has a long wait for each read/write to complete before the next operation can start.

However, this MIGHT explain the throughput: the slowdowns I experienced happen when I am writing large amounts of data, while small files are immediate.  This could be an indication that the disk knows it is busy, so it throttles large IO processes to allow other small processes to go first.

I will have to do some tests when I get the time.
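When I do, it will probably be something as simple as dd (the paths and sizes below are just examples; oflag=direct and iflag=direct skip the page cache so you measure the disk rather than RAM):

# Sequential write test: 1GB of zeroes straight to disk
dd if=/dev/zero of=/mnt/test/ddtest bs=1M count=1024 oflag=direct

# Sequential read test of the same file, dropping caches first
sudo sh -c "echo 3 > /proc/sys/vm/drop_caches"
dd if=/mnt/test/ddtest of=/dev/null bs=1M iflag=direct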


I wouldn't know, I have never used Acronis.  LVM is a Linux utility; if Acronis is used while Linux is running, then LVM is running, and Acronis runs through Linux, which will use LVM to make it work.  If Acronis requires Linux to be shut down, then LVM will be shut down too, and Acronis can't see individual LVs (but you can still back up PVs).

Yes, LVM has snapshot functionality, though I haven't used it.  It gives you a frozen view of a volume while you dd or cp the contents.  This is helpful if you are trying to back up a running system, but I prefer to shut the system down when making backups of it.  So I shut down Windows before using "dd" to copy it to a backup drive.  For the main Linux system, I use a live CD (hence Ubuntu).
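For anyone who wants to try it, the snapshot workflow looks roughly like this (the VG/LV names and sizes are placeholders; the snapshot only needs enough space to hold whatever changes on the origin while it exists):

# Create a snapshot of the "windows" LV in VG "xen"
sudo lvcreate -s -L 5G -n windows_snap /dev/xen/windows

# Copy the frozen snapshot to a backup drive while the original keeps running
sudo dd if=/dev/xen/windows_snap of=/mnt/backup/windows.img bs=4M

# Remove the snapshot when done
sudo lvremove /dev/xen/windows_snap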



I treat LVM like a file/folder system.  I organize VGs by type: I have "xen", which holds all running systems, "nas", which is my storage backup, and "backups", which is self-explanatory.  Then I create LVs in each for each "thing".  So xen has windows, nginx, pfsense, and three LVs for Dom0 (since it isn't running as an HVM, it gets one for root, one for home, and one for swap).  If you think of it like a file system, what I have set up is just a folder with more folders inside it.
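In command terms that layout is roughly the following (the PV device paths and the windows LV size are placeholders, the other sizes match what I described above):

# One VG per "folder"
sudo vgcreate xen /dev/sda3
sudo vgcreate nas /dev/md0
sudo vgcreate backups /dev/sdb1

# LVs inside the xen VG, one per "thing"
sudo lvcreate -L 8G  -n linux   xen
sudo lvcreate -L 20G -n home    xen
sudo lvcreate -L 2G  -n swap    xen
sudo lvcreate -L 60G -n windows xen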


I hope this helps explain a bit more.

~Casey


On Sat, May 12, 2012 at 1:07 PM, <cyberhawk001@xxxxxxxxx> wrote:
1.) SO, Casey DeLorme your ASRock motherboard has onboard USB3 controller. How were you able to get it working and be able to use it in your Dom0, as you have explained you were not able to pass it thru to DomU but left it usable for Dom0. For me, if i run the command lspci -k it shows up as the following:

03:00.0 USB controller: Etron Technology, Inc. EJ168 USB 3.0 Host Controller (rev 01)
               Subsystem: ASRock Incorporation Device 7023
               Kernel driver in use: xhci_hcd
04:00.0 USB controller: Etron Technology, Inc. EJ168 USB 3.0 Host Controller (rev 01)
               Subsystem: ASRock Incorporation Device 7023
               Kernel driver in use: xhci_hcd
05:00.0 USB controller: Etron Technology, Inc. EJ168 USB 3.0 Host Controller (rev 01)
               Subsystem: ASRock Incorporation Device 7023
               Kernel driver in use: xhci_hcd

So my Xen with Kernel 3.3.4 is able to see my USB3 controllers, BUT when i plug anything into the ports, nothing happens. SO it doesn't seem to have any drivers installed or something and it just doesn't work. SO, was wondering what driver or anything you need to make Wheezy mount a device using the USB 3.0? I also don't quite understand why it shows 3 controllers since, as far as i know, there are only 2 USB3 connectors on the mobo where EACH can handle 2 USB3 ports. But maybe there is a 3rd one i didn't see.

2.) On a previous thread you replied to, Re: [Xen-users] gaming on multiple OS of the same machine?, you say "Also, if you are not confident in your ability to work with *nix, I wouldn't advise it.  I had spent two years tinkering with Web Servers in Debian, so I thought I would have an easy time of things.". SOOO, what is *nix all about?

3.) I was reading this article, http://www.redhat.com/magazine/009jul05/features/lvm2/#fig-striped and there it says that when using LVM, the lvcreate command uses linear mappings by default. But, in LVM, there is 2 ways to do mapping, LVM linear mapping and LVM striped mapping (4 physical extents per stripe). Did you or have you ever messed with LVM striped mapping? OR you just left it at the default. As the article further explains, "The striped mapping allows a single logical volume to nearly achieve the combined performance of two PVs and is used quite often to achieve high-bandwidth disk transfers." SO, without testing it out, not sure which will give a better performance. Heck if you are having some IO issues with your setup, MAYBE this might help, but not sure.

4.) I was trying to figure out the difference between dmraid and mdadm. So far as i know, dmraid is the software RAID that takes care of RAIDing hard drives and etc, where mdadm is the RAID management tool you use to create and monitor the RAID setup. SO, do you use mdadm at all? OR do you just use LVM2 to install the LVM once you make the RAID5? On the other end, DO YOU even need to install or use mdadm?


************************************

When you create a VG you can select one or more PV's, and you give that VG a name.  When SSD's were super expensive, it was cheaper to buy several smaller SSD's and use a spanning LVM to create a 64GB SSD from 4x 16GB SSD's.  I personally prefer one PV to each VG, and I use RAID 5 for my data storage PV to create data redundancy.  Obviously spanning VG's are subject to common sense, you wouldn't combine a 5400 RPM drive with a 7200 RPM drive, or a SSD for that matter, as it would kill performance.
************************************
SO, by saying that you prefer to assign one PV to each VG being created, BUT if you let's say combine 3x 500GB HDDs using RAID5, you don't have direct control of which HDD the VG will get put on? OR do you? heheeh  I do agree that it makes a lot more sense to setup RAID or LVM on a set of identical HDD/SSDs instead of mixing it up, but if you setup a VG on a span of 3x HDDs in RAID5, and add LVs, the data will be striped across all 3 HDDs anyways.

SO, it makes a bit more sense about how LVM works. In some ways you really only deal with VGs and LVs and having a 3-step process to making LVM seems like an extra step. But, since i didn't invent the LVM technology, then again there has to be a way to designate OR mark a partition as PV instead of a different partition type...lol.


********* ********* ********* ********* *********
These examples are using one physical disk.  Let's jump straight to the insanity that is Xen Virtual Machine configuration.

My System has 5 Hard Drives.  One SSD, one 500GB 5400 RPM Laptop HDD, and 3x 1TB HDD's.  I also have a single DVDRW (I know, so old-fashioned...).
********* ********* ********* ********* *********
heeh NA, it is not old-fashioned. I still use and rely on a DVDRW at certain times. I still need to occasionally burn a CD/DVD, plus i use the Acronis True Image backup a lot to create backup images of Windows and Linux Partitions, which is still only on a CD. However, i am also curious whether Acronis True Image is even able to read LVM partitions to back up certain LV partitions. If not, then LVM2 has a built-in way to do snapshots, WHICH i am hoping can create a single image file that i can backup to an external HDD.


********* ********* ********* ********* *********
The three 1TB drives in RAID 5 create a single disk, I have GPT partition table, and the entire space is setup for LVM (hence my data storage PV).

The 500GB drive is purely for dd backups, so it has one PV spanning the drive as well.
********* ********* ********* ********* *********
Ya my setup is very similar, except i keep my 500GB Laptop HDD as an external HDD and attach via eSATA as needed.


********* ********* ********* ********* *********
My SSD is where things get really crazy, for fastest speeds I wanted to run all of my systems on it, so I have:

GPT Partition Table
256MB EFI Partition
256MB Boot Partition
Remainder to LVM (PV)
Volume Group (xen)
(LV) linux 8GB partition with Ext4 filesystem mounted to root ("/")
(LV) home 20GB Partition with Ext4 Filesystem - Mounted to "/home" for data storage (if linux dies, my data doesn't go with it)
(LV) swap 2GB assigned as SWAP
(LV) windows - This gets subpartitioned by windows
MSDos Partition Table & MBR
Windows Partition
(LV) nginx - Sub-partitioned by Debian Squeeze
msdos Partition Table & MBR (pointing to Grub)
Single Linux Partition
(LV) pfsense - Same as Debian except using FreeBSD
msdos Partition Table & MBR (pointing to Grub)
Single Unix Partition

What I am trying to show here is that each virtual machine will treat its LV like a physical drive, and you end up with a partition table and partitions inside of the LV (which FYI is basically a partition).

This is why you'll occasionally see Xen-Users posts about migrating a primary Windows system to an HVM partition, it's very difficult lol.

In conclusion, if you want four DomU's, you only need four LV's, as you can sub-partition each of them from inside the virtual machine, since it treats its LV as a physical drive.

So, create 4x 50GB Logical Volumes, assign each to a virtual machine, and when you install Linux select "Manual Partitioning" and specify 1GB for Swap.
********* ********* ********* ********* *********
humm, WOW well that is one complicated SSD setup ehehe. SO ya, setting up only one VG and making as many LVs as needed seems like a much simpler and easier way to handle LVM instead of making more VGs.


********* ********* ********* ********* *********
GPU sharing in a consumer setting would let you buy one really kickass video card, and share it between several virtual machines, at the same time.  Normal GPU's don't have this functionality, hence why we have to pass each physical device directly to the Virtual Machine.

If you understand that much, you can probably understand why I mentioned this as being much needed for the consumer market.  It would be game changing (hah get it, because we use GPU's for gaming... sorry I'm lame).
********* ********* ********* ********* *********

heheh ya i get it AND it would be a very gametastic feature to have...lol. I am very much stoked about a GPU sharing tech working in Xen. As my video card is an ATI FirePro V7900 which is a pretty bad ass card by itself, the possibility of having lets say two VMs running off it directly while both having full graphics acceleration IS just plain AWESOME eheheh.


************************************
I agree that a good GUI can be a wonderful thing, but I think it's situational.  If you're anything like me you probably use keyboard shortcuts all the time, and that is "User Knowledge" not world knowledge, which means you're essentially using command line regularly already.  I happen to be exceptionally quick on a keyboard by comparison to the mouse.
************************************
ehheehhe, YAP i totally agree with you here..lol. I have been using a keyboard since 1990, SO i have come to use certain shortcuts religiously eheh. SO ya, it makes things a lot faster as there is NOTHING i hate more than sitting here and watching the PC chug along and waiting for something to finish when it is supposed to be blazing fast eheheh.


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users


 

