
[Xen-API] sharing NFS SRs


IMHO one of the weaknesses of the current NFS SR backend in XCP is that a 
single SR cannot be shared between pools. This is because the backend relies on 
the xapi pool framework to prevent:

1. multiple hosts from coalescing the same vhds.

2. the same vhd being attached to two VMs at the same time.

3. a vhd being read on one node even after it has been coalesced and deleted 
on another.

If multiple pools could safely share the same NFS SR then a cross-pool migrate 
(which is possible with the current code) wouldn't have to actually mirror the 
disk contents.
With this in mind I've been looking into NFS locking again. I realize this is 
a... tricky thing to get right... and google turns up lots of horror stories. 
Anyway, here's what I was thinking:

For handling (1) and (2), we would only need one lock file (really a "lease 
file") per vhd. In the event of a network interruption we already know that 
running VMs are likely to fail after 90s or so -- the maximum time (IIRC) a 
Windows VM will allow a page file write to take. So we could

* explicitly tell tapdisk to shut down after this long (since the VM will 
probably have blue-screened anyway)

* periodically refresh our leases, setting them to expire well after the 
tapdisks are guaranteed to have shut down
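To make the lease idea concrete, here's a minimal sketch in Python. It uses the 
classic hard-link trick, since link() is atomic over NFS whereas 
open(O_CREAT|O_EXCL) historically wasn't on older clients. The file layout, the 
TTL values, and the function names are all illustrative assumptions, not an 
existing API, and the "steal an expired lease" path is simplified (two hosts 
racing to steal would need an extra round of arbitration):

```python
import os
import socket
import time

LEASE_TTL = 300          # illustrative: must exceed the ~90s tapdisk timeout plus margin
REFRESH_INTERVAL = 60    # illustrative: how often a live host rewrites its lease

def _write_lease_body(path):
    # Record who holds the lease and when it expires, fsync'd so other
    # NFS clients see a consistent file after close-to-open revalidation.
    with open(path, "w") as f:
        f.write("%s %f" % (socket.gethostname(), time.time() + LEASE_TTL))
        f.flush()
        os.fsync(f.fileno())

def try_acquire_lease(vhd_path):
    """Try to take the per-vhd lease file; reap it first if it has expired."""
    lease = vhd_path + ".lease"
    tmp = "%s.%s.%d" % (lease, socket.gethostname(), os.getpid())
    _write_lease_body(tmp)
    try:
        os.link(tmp, lease)      # atomic over NFS: exactly one host wins
        return True
    except OSError:
        # Lease already exists: remove it only if its recorded expiry passed,
        # so a crashed host's lease frees itself after LEASE_TTL.
        try:
            with open(lease) as f:
                _holder, expiry = f.read().split()
            if time.time() > float(expiry):
                os.unlink(lease)  # next acquire attempt can take it
        except (IOError, OSError, ValueError):
            pass                  # raced with another host; just report failure
        return False
    finally:
        os.unlink(tmp)

def refresh_lease(vhd_path):
    """Push the expiry out so the lease outlives any still-running tapdisk."""
    _write_lease_body(vhd_path + ".lease")
```

A host that stops refreshing (network partition, crash) simply lets its leases 
run out, which is exactly the "disks become unlocked a few minutes later" 
behaviour described above.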

So if a host leaves the network, all disks become unlocked a few minutes later 
and the VMs (and coalesce jobs) can safely be restarted on another pool. This 
could then be used as the foundation for a new "HA" feature, where only VMs 
whose I/Os have failed are shut down and restarted.

From an implementation point of view, this python library looks pretty good:


I'm not totally sure how to handle (3): would it be sufficient to periodically 
reopen the vhd chain in tapdisk, or just handle the error when a read fails 
and reopen the chain then?
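The second option ("reopen on failure") might look something like the sketch 
below. Everything here is hypothetical -- the `open_chain` callable stands in 
for whatever tapdisk does to resolve the parent locators of a vhd chain -- but 
it shows the shape of the idea: a read through a parent that was deleted by a 
coalesce on another host should surface as ESTALE (or EIO) from NFS, and 
reopening re-resolves the chain against the rewritten vhds:

```python
import errno

def read_with_reopen(open_chain, offset, length, retries=1):
    """Read from a vhd chain, reopening it if the read hits a stale handle.

    open_chain: hypothetical callable returning a seekable file-like object
    for the fully-resolved chain; in reality this logic belongs in tapdisk.
    """
    chain = open_chain()
    try:
        for attempt in range(retries + 1):
            try:
                chain.seek(offset)
                return chain.read(length)
            except (IOError, OSError) as e:
                # Only retry the errors a deleted-parent would produce.
                if attempt == retries or e.errno not in (errno.ESTALE, errno.EIO):
                    raise
                chain.close()
                chain = open_chain()  # re-resolves the (possibly rewritten) chain
    finally:
        chain.close()
```

Periodic reopening avoids the error path entirely but costs a reopen per 
interval even when nothing changed; reopen-on-failure only pays when a coalesce 
actually happened, at the price of one failed I/O surfacing first.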

Comments are welcome!


Xen-api mailing list


