
Re: [Xen-devel] Re: Interdomain comms


  • To: "Ronald G. Minnich" <rminnich@xxxxxxxx>
  • From: Eric Van Hensbergen <ericvh@xxxxxxxxx>
  • Date: Fri, 6 May 2005 11:49:44 -0500
  • Cc: Eric Van Hensbergen <ericvh@xxxxxxxxxxxxxxxxxxxxx>, Mike Wray <mike.wray@xxxxxx>, Harry Butterworth <harry@xxxxxxxxxxxxxxxxxxxxxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Fri, 06 May 2005 16:49:32 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

On 5/6/05, Ronald G. Minnich <rminnich@xxxxxxxx> wrote:
> I would be happiest (is Eric Van Hensbergen out there?) if we could move
> to a model where the basic inter-domain communications is 9p, with all
> that implies. I can try to say more if there is interest.
> 

I've been developing such a model using IBM's research hypervisor -
but aside from some minor differences in the underlying transport
mechanisms, it should be straightforward to apply to the Xen framework
(I was planning to look into that sometime this summer).

I selected 9P as the protocol because of its limited set of
underlying transport requirements (it needs a reliable, in-order pipe
and otherwise makes no assumptions), its flexibility (it can be used
to share file systems, devices, protocol stacks, communication
channels, and application interfaces, both across partitions and
across the network), and the simplicity of its design (and the
resulting simplicity of implementation).
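
To give a flavor of just how little the protocol demands of the
transport: every 9P message is a self-describing, length-prefixed,
little-endian blob, so anything that can move bytes reliably and in
order can carry it.  A rough sketch of encoding the Tversion message
that opens every session (the helper names here are mine, not from
any particular implementation):

/* 9P2000 framing: size[4] type[1] tag[2] body...
 * Integers are little-endian; strings are 2-byte-length-prefixed.
 * Illustrative only -- names are not from any real implementation. */
#include <stdint.h>
#include <string.h>

enum { Tversion = 100, NOTAG = 0xFFFF };

static uint8_t *put16(uint8_t *p, uint16_t v)
{
        p[0] = v; p[1] = v >> 8; return p + 2;
}

static uint8_t *put32(uint8_t *p, uint32_t v)
{
        p[0] = v; p[1] = v >> 8; p[2] = v >> 16; p[3] = v >> 24;
        return p + 4;
}

/* Tversion = size[4] type[1] tag[2] msize[4] version[s] */
static size_t enc_tversion(uint8_t *buf, uint32_t msize, const char *ver)
{
        uint16_t vlen = strlen(ver);
        uint8_t *p = buf + 4;            /* total size filled in last */

        *p++ = Tversion;
        p = put16(p, NOTAG);             /* version is never multiplexed */
        p = put32(p, msize);             /* largest message we'll accept */
        p = put16(p, vlen);
        memcpy(p, ver, vlen); p += vlen; /* e.g. "9P2000" */
        put32(buf, (uint32_t)(p - buf)); /* size includes itself */
        return p - buf;
}

Every other message in the protocol is framed exactly the same way,
which is where most of the implementation simplicity comes from.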

The initial target of my implementation was special-purpose
application kernels (built with a library OS) running alongside a
Linux kernel acting as an I/O host and controller.  Our prototype
library-OS version of the client library is approximately 2000 lines
of code, and the Linux file system client (http://v9fs.sf.net) is
4000 lines of code (the extra weight comes from the VFS mapping
routines).
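
The reason the client side stays that small is that the whole
protocol surface is roughly a half-dozen request/response pairs over
a single pipe.  A hypothetical client interface (names and signatures
are mine, purely for illustration, not the actual library-OS API)
comes down to something like:

/* Hypothetical 9P client interface -- illustrative only.
 * Each call is one request/response round trip over the pipe. */
#include <stdint.h>

typedef uint32_t p9fid;          /* client-chosen handle to a server file */

struct p9conn;                   /* opaque: one reliable, in-order pipe */

int   p9_version(struct p9conn *c, uint32_t msize);   /* negotiate */
p9fid p9_attach(struct p9conn *c, const char *aname); /* root of a tree */
p9fid p9_walk(struct p9conn *c, p9fid dir, const char *name);
int   p9_open(struct p9conn *c, p9fid f, int mode);
long  p9_read(struct p9conn *c, p9fid f, uint64_t off, void *buf,
              uint32_t count);
long  p9_write(struct p9conn *c, p9fid f, uint64_t off, const void *buf,
               uint32_t count);
int   p9_clunk(struct p9conn *c, p9fid f);            /* release a fid */

Since devices, file systems, and control interfaces are all exported
as file trees, that one small interface covers all of them.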

This summer we will be looking at improving the efficiency of our
transport infrastructure; adding failure detection, recovery, and
failover (including network failover); and adding intelligent network
transports (we work over Ethernet and shared memory right now, but we
want to fully prototype InfiniBand and related transports).
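
To be concrete about what a transport owes the protocol: nothing
beyond the reliable, in-order pipe mentioned above.  For the shared
memory case that can be as simple as a single-producer/single-consumer
byte ring; the sketch below is illustrative only (a real one needs
memory barriers, and a wakeup mechanism such as a Xen event channel
instead of the caller spinning on a short return):

/* Illustrative SPSC byte ring over a shared page.  prod and cons are
 * free-running counters; unsigned wraparound makes the arithmetic
 * work as long as RING_SZ is a power of two. */
#include <stdint.h>

#define RING_SZ 2048             /* power of two, fits in one page */

struct ring {
        volatile uint32_t prod;  /* advanced by the sender only */
        volatile uint32_t cons;  /* advanced by the receiver only */
        uint8_t data[RING_SZ];
};

static uint32_t ring_put(struct ring *r, const void *src, uint32_t n)
{
        uint32_t space = RING_SZ - (r->prod - r->cons);
        uint32_t i, todo = n < space ? n : space;

        for (i = 0; i < todo; i++)
                r->data[(r->prod + i) & (RING_SZ - 1)] =
                        ((const uint8_t *)src)[i];
        r->prod += todo;         /* publish after the copy */
        return todo;             /* caller resubmits any remainder */
}

static uint32_t ring_get(struct ring *r, void *dst, uint32_t n)
{
        uint32_t avail = r->prod - r->cons;
        uint32_t i, todo = n < avail ? n : avail;

        for (i = 0; i < todo; i++)
                ((uint8_t *)dst)[i] =
                        r->data[(r->cons + i) & (RING_SZ - 1)];
        r->cons += todo;
        return todo;
}

Everything 9P-specific sits above that boundary, which is what makes
swapping in InfiniBand or other transports later a contained change.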

Some of the bits (like the v9fs Linux client and the u9fs user-space
server) are already available as open source.  We are in the process
of jumping through the paperwork hoops to open-source the library-OS
infrastructure and the shared memory transport pieces.  A paper on
v9fs was presented at Freenix 2005.

I'd be happy to answer any questions on this work.  I was planning on
presenting the approach to the Xen community once I had a more solid
prototype and some performance numbers, but Ron has let the cat out of
the bag. ;)

       -eric

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
