Re: [Xen-users] iscsi vs nfs for xen VMs
Hi,

Quite funny. We have been using Xen over iSCSI for years, and we are now turning to a NetApp NAS to migrate to NFS, for the following reasons:
- Easier setup, no matter how many Xen hosts you have.
- No cluster filesystem to deal with (we use OCFS2 quite successfully anyway).
- Some three years ago we were using nbd + drbd + clvm2 + fenced + ... to get direct LVM volumes into the VMs, but it is too complicated and too complex to maintain, especially when something goes wrong or you have more than 2 dom0s.
- According to the benchmarks I received, there is no difference in performance between NFS and iSCSI; there are some differences in certain cases due to the different levels of caching.

Regards,
François.

----- Original Message -----
From: "Christopher Chen" <muffaleta@xxxxxxxxx>
To: "Jia Rao" <rickenrao@xxxxxxxxx>
Cc: xen-users@xxxxxxxxxxxxxxxxxxx
Sent: Friday, 7 August, 2009 16:00:20 GMT +01:00 Amsterdam / Berlin / Bern / Rome / Stockholm / Vienna
Subject: Re: [Xen-users] iscsi vs nfs for xen VMs

Jia:

You're partially correct. iSCSI as a protocol has no problem allowing multiple initiators access to the same block device, but you're almost certain to run into corruption if you don't set up a higher-level locking mechanism to make sure your access is consistent across all devices.

To state it again: iSCSI is not in itself a protocol that provides all the features necessary for a shared filesystem. If you want that, you need to look into the shared-filesystem space (OCFS2, GFS, etc.).

The other option is to set up individual logical volumes on the shared LUN, one for each VM. Note that this still requires an inter-machine locking protocol--in my case, clustered LVM.

There are quite a few of us who have gone ahead and used clustered LVM with the phy handler--this keeps the LVM data consistent across the multiple machines, while we administratively restrict access to each logical volume to one machine at a time (unless we're doing a live migration).

I hope this helps.

Cheers

cc

On Fri, Aug 7, 2009 at 6:48 AM, Jia Rao <rickenrao@xxxxxxxxx> wrote:
> Hi all,
>
> I used to host the disk images of my Xen VMs on an NFS server and am
> considering a move to iSCSI for performance reasons.
> Here is the problem I encountered:
>
> With iSCSI, there are two ways to export a virtual disk to the physical
> machine hosting the VMs.
>
> 1. Export the virtual disk (on the target side it is either an img file or
> an LVM volume) as a physical device, e.g. sdc, then boot the VM using
> "phy:/dev/sdc".
>
> 2. Export the partition containing the virtual disks (in this case, the
> virtual disks are img files) to each physical machine as a physical device,
> and then on each physical machine mount the new device into the file system.
> This way the img files are accessible from every physical machine
> (similar to NFS), and the VMs are booted using tapdisk,
> "tap:aio:/PATH_TO_IMGS_FILES".
>
> I prefer the second approach because I need tapdisk (each virtual disk is a
> process on the host machine) to control the I/O priority among VMs.
>
> However, there is a problem when I share the LUN containing all the VM img
> files among multiple hosts.
> It seems that any modification to the LUN (writing some data into the folder
> where the LUN is mounted) is not immediately observable on the other hosts
> sharing the LUN (with NFS, changes are immediately synchronized to all the
> NFS clients). The changes only become visible when I umount the LUN and
> remount it on the other physical hosts.
>
> I searched the Internet, and it seems that iSCSI is not intended for sharing
> a single LUN between multiple hosts.
> Is that true, or do I need some specific configuration of the target or
> initiator to make the changes immediately visible to the other initiators?
>
> Thanks in advance,
> Jia

--
Chris Chen <muffaleta@xxxxxxxxx>
"The fact that yours is better than anyone else's
is not a guarantee that it's any good."
-- Seen on a wall
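For readers following along, the two layouts Jia describes map roughly onto the following domU disk lines (device names, mount point, and image path are placeholders, not taken from the thread). As Christopher points out, the second layout is only safe when the filesystem on the shared LUN is a cluster filesystem such as OCFS2 or GFS; an ordinary ext3 mount on several dom0s leaves each host with its own stale cache of the metadata, which is exactly the behaviour Jia observed.

    # Option 1: one LUN (or one exported LV) per VM, handed to the guest
    # as a whole block device via the phy handler
    disk = [ 'phy:/dev/sdc,xvda,w' ]

    # Option 2: one large LUN holding a filesystem full of image files,
    # mounted on every dom0 and used through tapdisk
    #   mount /dev/sdb1 /srv/xen-images    (must be OCFS2/GFS if shared)
    disk = [ 'tap:aio:/srv/xen-images/vm01.img,xvda,w' ]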
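A minimal sketch of the clustered-LVM layout Christopher mentions, assuming the shared LUN appears as /dev/sdb on every dom0 and that the cluster stack (clvmd and fencing) is already running; the volume group and volume names are made up for the example:

    # Run once, from any dom0 that can see the shared LUN
    pvcreate /dev/sdb
    vgcreate --clustered y vg_xen /dev/sdb
    lvcreate -L 20G -n vm01-disk vg_xen

    # In vm01's config, on whichever dom0 currently runs the guest
    disk = [ 'phy:/dev/vg_xen/vm01-disk,xvda,w' ]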
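By comparison, the NFS approach that François is migrating to (and that Jia started from) is just a mount plus a file-backed disk; the server name and paths below are again only examples:

    # On each dom0: mount the export that holds the VM images
    mount -t nfs -o rw,hard,intr filer:/vol/xen_images /var/lib/xen/images

    # The image files are then visible to every dom0 at once, and a domU
    # can use one through tapdisk
    disk = [ 'tap:aio:/var/lib/xen/images/vm01.img,xvda,w' ]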