
Re: [Xen-devel] Re: Interdomain comms



On 5/8/05, Andrew Warfield <andrew.warfield@xxxxxxxxx> wrote:
> Hi Eric,
> 
>    Your thoughts on 9P are all really interesting -- I'd come across
> the protocol years ago in looking into approaches to remote device/fs
> access but had a hard time finding details.  It's quite interesting to
> hear a bit more about the approach taken.
>

There's a bigger picture that Ron and I have discussed, but there are
lots of details that need to be resolved/proven before it's worth
talking about.  The overall goal we are working towards is a simple
interface with equivalent or better performance.  The generality of
the approach we have discussed is appealing, but we are well aware it
won't be adopted unless we can show competitive performance and
reliability.

> 
>    For the more specific cases of FE/BE comms, I think the devil may
> be in the details more than the current discussion is alluding to.
> 

I agree completely; in fact, there are likely several devils unique to
each target architecture ;)  But that's just the sort of thing that
keeps systems programming interesting.  These sorts of things will only
be worked out once we have a prototype to drive into the various brick
walls.

> 
> There are also a pile of complicating cases with regard to cache
> eviction from a BE domain, migration, and so on that make the
> accounting really tricky.  I think it would be quite good to have a
> discussion of generalized interdomain comms address the current
> drivers, as well as a hypothetical buffer cache as potential cases.
> Does 9P already have hooks that would allow you to handle this sort of
> per-application special case?
>

Page table and cache manipulation would likely sit below the 9P layer
to keep things portable and abstract (as I've said earlier, perhaps
Harry's IDC proposal is the right layer to handle such things).  9P as
a protocol is quite flexible, but existing implementations are
somewhat simplistic and limited in regard to more advanced buffer/page
sharing capabilities.  We are looking at some of these issues in our
exploration of using Plan 9 and 9P in HPC cluster environments (over
cluster interconnects) -- in fact, the idea of using 9P between VMM
partitions fell out of that work.

> 
>    Additionally, I think we get away with a lot in the current drivers
> from a failure model that excludes transport.  The FE or BE can crash,
> and the two drivers can be written defensively to handle that.  How
> does 9P handle the strangenesses of real distribution?
> 

With existing 9P implementations, defensive drivers are the way to go.
There have been three previous solutions proposed for failure
detection and recovery with 9P: one handled recovery of 9P state
between client and server, another handled reliability of the
underlying RPC transport, and the third was a layered file server
providing defensive semantics.  As I said earlier, we'll be exploring
these more fully (along with looking at failover) this summer --
specifically in the context of VMMs.
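To make the "defensive" option concrete, here is a rough sketch (not
Xen or Plan 9 code -- all of the p9_* names are hypothetical
placeholders) of how a driver might bound its exposure to a wedged peer
by pairing each request with a timeout, a Tflush-style cancel, and a
retry limit:

    #include <errno.h>
    #include <stdint.h>

    struct p9_conn;                     /* opaque transport handle (assumed) */
    struct p9_fcall { uint16_t tag; };  /* simplified 9P message (assumed) */

    /* Assumed primitives: send a request and wait up to timeout_ms for
     * the matching reply; cancel an outstanding request by its tag
     * (what 9P's Tflush message is for). */
    int  p9_rpc_wait(struct p9_conn *c, struct p9_fcall *req,
                     struct p9_fcall *rep, int timeout_ms);
    void p9_flush_tag(struct p9_conn *c, uint16_t tag);

    enum { RPC_TIMEOUT_MS = 500, RPC_RETRIES = 3 };

    int defensive_9p_rpc(struct p9_conn *c, struct p9_fcall *req,
                         struct p9_fcall *rep)
    {
        for (int i = 0; i < RPC_RETRIES; i++) {
            int rc = p9_rpc_wait(c, req, rep, RPC_TIMEOUT_MS);
            if (rc == 0)
                return 0;              /* reply arrived */
            if (rc != -ETIMEDOUT)
                return rc;             /* hard transport error: give up */
            p9_flush_tag(c, req->tag); /* cancel the stale tag, retry */
        }
        return -ETIMEDOUT;             /* peer presumed dead; fail the I/O */
    }

The interesting policy questions (when to retry versus fail the whole
connection, and what state to rebuild after a reconnect) live above
this loop, which is exactly where the three recovery approaches above
differ.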

9P isn't a magic bullet here, and there will be lots of issues that
will need to be dealt with either in the layer above it or the layer
below it.

        -eric
