
Re: [Xen-devel] Fatal crash on xen4.2 HVM + qemu-xen dm + NFS



Ian,

--On 22 January 2013 10:13:58 +0000 Ian Campbell <Ian.Campbell@xxxxxxxxxx> wrote:

> AIUI it is intentional. The safe option (i.e. the one which doesn't risk
> trashing the filesystem) is not to cache (so that writes have really hit
> the disk when the device says so). We want to always use this mode with
> PV devices.

Except it isn't the safe option, because O_DIRECT (apparently) is not safe
with NFS or (we think) iSCSI, and it fails in a far nastier way. And fixing
that is more intrusive (see your patch series).
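For anyone following along, here is a minimal userspace sketch of what the
O_DIRECT mode boils down to at the syscall level. The path and sizes are
made up, and this is an illustration of the semantics rather than qemu code:

#define _GNU_SOURCE             /* O_DIRECT is a Linux/GNU extension */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* cache=none in qemu amounts to an open() like this one. */
    int fd = open("/mnt/nfs/disk.img", O_RDWR | O_CREAT | O_DIRECT, 0600);
    if (fd < 0) {
        perror("open(O_DIRECT)");
        return 1;
    }

    /* O_DIRECT requires aligned buffers, offsets and lengths; 4096
     * covers both 512-byte and 4K-sector devices. */
    void *buf;
    if (posix_memalign(&buf, 4096, 4096) != 0) {
        return 1;
    }
    memset(buf, 0, 4096);

    /* On a local filesystem this bypasses the host page cache, so the
     * data has really hit the disk when pwrite() returns.  Over NFS
     * that guarantee (apparently) does not hold, which is the problem
     * being discussed. */
    if (pwrite(fd, buf, 4096, 0) != 4096) {
        perror("pwrite");
        return 1;
    }

    free(buf);
    close(fd);
    return 0;
}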

I can't test it at the moment (travelling), but if not using O_DIRECT
fixes things, would you accept a patch that allowed one to specify an
option to disable O_DIRECT? If so, I need to look at how to get at the
disk config line from xen_disk.c.
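Roughly what I have in mind in blk_init() is something like the below.
The "direct-io-safe" node name is invented and I'm quoting the flags from
memory, so treat this as a sketch rather than a tested patch:

/* Default to cached writes; only add O_DIRECT when the backend
 * directory says the storage can cope with it. */
int directiosafe = 0;
int qflags = BDRV_O_CACHE_WB | BDRV_O_NATIVE_AIO;

/* xenstore_read_be_int() is the existing helper xen_disk.c uses for
 * integer nodes in the backend directory; it fails if the node is
 * absent. */
if (xenstore_read_be_int(&blkdev->xendev, "direct-io-safe",
                         &directiosafe) == -1) {
    directiosafe = 0;              /* node absent: assume unsafe */
}
if (directiosafe) {
    qflags |= BDRV_O_NOCACHE;      /* today's unconditional behaviour */
}

The harder part is presumably plumbing such an option from the domain
config through to the backend's xenstore directory, which is what I meant
about getting at the disk config line.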

> However the performance impact of not caching the emulated (esp. IDE)
> devices is enormous, and so the trade-off was made to cache them on the
> theory that they would only be used during install and early boot and
> that OSes would switch to PV drivers ASAP. Also AIUI the emulated IDE
> interface doesn't have any way to request that something really hits the
> disk anyway, but I might be misremembering that.

You mean the equivalent of FLUSH/FUA? I can't comment on that, but I
believe the write-barrier support isn't in the 4.2 xen_disk either.
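For reference, the PV protocol does define a flush request
(BLKIF_OP_FLUSH_DISKCACHE in blkif.h); a backend honouring it without
O_DIRECT would do the userspace equivalent of the below. Again the path
is made up and this is only a sketch of the semantics:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* No O_DIRECT: writes land in the host page cache first. */
    int fd = open("/mnt/nfs/disk.img", O_RDWR | O_CREAT, 0600);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    char buf[512] = { 0 };

    /* This write may sit in the page cache indefinitely... */
    if (pwrite(fd, buf, sizeof(buf), 0) != (ssize_t)sizeof(buf)) {
        perror("pwrite");
        return 1;
    }

    /* ...until a flush makes it durable.  Handling a guest FLUSH (or
     * per-request FUA) means doing this at the right points, which is
     * the write-barrier work mentioned above. */
    if (fdatasync(fd) < 0) {
        perror("fdatasync");
        return 1;
    }

    close(fd);
    return 0;
}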

> Check the archives from around the time this change was made; I'm pretty
> sure there was some discussion of why things are this way.

I tried, but couldn't find anything. I could only find the patches going
to qemu-devel and not on the main list, though I may well have missed
them, as I'm relying on Google to find things in the archives from that
time.

--
Alex Bligh
