Re: [Xen-users] File system layout
> My supervisor wants to create a LV, then give that to a VM, and then
> create a VG with LVs under it inside the VM. I have read on the list
> that this will give bad performance inside the VM.

It won't necessarily give *bad* performance, but doing two layers of translation is a bit unfortunate.

What I think is perhaps more significant is the management side of things: if you've got two layers of LVM nested like that, it's no longer a simple matter of just mounting a DOS partition inside an LVM volume when you want to inspect the guest's filesystems. So it makes it harder to access an inactive guest's filesystem (you shouldn't mount an active guest's filesystem in any case). I don't know how awkward LVM makes doing this sort of thing.

To me, the simplicity of doing all LVM management in dom0 and hiding it from the guest appeals. You can easily pass the LVs in dom0 through to the guest as separate drives; if it's a PV guest, you can even hotplug them. It's also possible to pass separate LVM volumes through so that they appear as individual partitions in the guest - this has the advantage that there's no MS-DOS partition table to worry about, just a load of linear LVs with filesystems on them. (There's a rough sketch of both ideas at the end of this message.)

> Can somebody point me to a recent (we are using 3.2) breakdown of file
> system performance inside of VMs?

Afraid I don't have such a document. Raw disks / partitions theoretically give the highest performance as a means of storing domU disks; LVM is probably next. File-based disks are probably the lowest performers - of the two alternative implementations, tap:aio disks are supposed to be preferable to file: disks.

Hope that helps some!

Cheers,
Mark

--
Push Me Pull You - Distributed SCM tool (http://www.cl.cam.ac.uk/~maw48/pmpu/)
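A minimal sketch of the dom0-side setup described above, assuming a dom0 volume group called vg0 and a guest called guest1 (all names here are illustrative, not taken from the original post):

    # /etc/xen/guest1 (fragment): export each dom0 LV so it appears in the
    # guest as a bare partition - no MS-DOS partition table inside the LVs.
    disk = [ 'phy:/dev/vg0/guest1-root,xvda1,w',
             'phy:/dev/vg0/guest1-swap,xvda2,w' ]

    # Hotplugging an extra LV into the running PV guest, from dom0:
    lvcreate -L 10G -n guest1-data vg0
    xm block-attach guest1 phy:/dev/vg0/guest1-data xvdb w

    # Inspecting the (shut-down) guest's root filesystem from dom0 is then
    # just a direct mount, with no nested LVM to activate first:
    mount /dev/vg0/guest1-root /mnt/guest1-root

With the nested-LVM layout, dom0 would instead have to scan and activate the guest's own VG (and cope with any partition table inside the LV) before it could mount anything, which is the extra management hassle mentioned above.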