
Re: [Xen-devel] VIRTIO - compatibility with different virtualization solutions



On Thu, Feb 20, 2014 at 06:50:59PM -0800, Anthony Liguori wrote:
> On Wed, Feb 19, 2014 at 5:31 PM, Rusty Russell <rusty@xxxxxxxxxxx> wrote:
> > Anthony Liguori <anthony@xxxxxxxxxxxxx> writes:
> >> On Tue, Feb 18, 2014 at 4:26 PM, Rusty Russell <rusty@xxxxxxxxxxx> wrote:
> >>> Daniel Kiper <daniel.kiper@xxxxxxxxxx> writes:
> >>>> Hi,
> >>>>
> >>>> Below you can find a summary of work regarding VIRTIO
> >>>> compatibility with different virtualization solutions. It was
> >>>> done mainly from the Xen point of view, but the results are quite
> >>>> generic and can be applied to a wide spectrum of virtualization
> >>>> platforms.
> >>>
> >>> Hi Daniel,
> >>>
> >>>         Sorry for the delayed response, I was pondering...  CC changed
> >>> to virtio-dev.
> >>>
> >>> From the standard's POV: it's possible to abstract out where we use
> >>> 'physical address' into an 'address handle'.  It's also possible to
> >>> define this per-platform (i.e. Xen-PV vs everyone else).  This is
> >>> sane, since Xen-PV is a distinct platform from x86.
> >>
> >> I'll go even further and say that "address handle" doesn't make sense either.
> >
> > I was trying to come up with a unique term; I wasn't trying to define
> > semantics :)
> 
> Understood, that wasn't really directed at you.
> 
> > There are three debates here now: (1) what should the standard say, and
> 
> The standard should say, "physical address"
> 
> > (2) how would Linux implement it,
> 
> Linux should use the PCI DMA API.
> 
> > (3) should we use each platform's PCI
> > IOMMU.
> 
> Just like any other PCI device :-)
> 
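
For concreteness, a minimal sketch of what "use the DMA API" means for
a frontend driver.  dma_map_single()/dma_mapping_error() are the real
kernel API; queue_buf() and the u64 ring slot are invented for
illustration:

  #include <linux/dma-mapping.h>
  #include <linux/pci.h>

  /* Sketch only: put whatever cookie the DMA API returns on the ring. */
  static int queue_buf(struct pci_dev *pdev, u64 *ring_slot,
                       void *buf, size_t len)
  {
          dma_addr_t handle;

          handle = dma_map_single(&pdev->dev, buf, len, DMA_TO_DEVICE);
          if (dma_mapping_error(&pdev->dev, handle))
                  return -ENOMEM;

          /*
           * On bare metal this is (almost) the physical address, behind
           * an IOMMU it is an IOVA, and on Xen it can be whatever Xen's
           * dma_ops hand back -- the driver neither knows nor cares.
           */
          *ring_slot = (u64)handle;
          return 0;
  }
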
> >> Just using grant table references is not enough to make virtio work
> >> well under Xen.  You really need to use bounce buffers à la persistent
> >> grants.
> >
> > Wait, if you're using bounce buffers, you didn't make it "work well"!
> 
> Preaching to the choir, man...  but bounce buffering has proven to be
> faster than doing grant mappings on every request.  xen-blk does
> bounce buffering by default and I suspect netfront is heading in that
> direction soon.
> 

FWIW, Annie Li @ Oracle once implemented a persistent-mapping prototype
for netfront and the results were not satisfying.
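
For reference, the persistent-grant trick amounts to granting a pool of
pages to the backend once at connect time and then only copying on the
hot path, instead of issuing grant/ungrant operations per request.
Roughly (a sketch; pool, slot and the ring layout are invented, error
handling omitted):

  /* Once, at ring setup: grant every page in the pool (PV case). */
  for (i = 0; i < POOL_PAGES; i++) {
          pool[i].page = alloc_page(GFP_KERNEL);
          pool[i].gref = gnttab_grant_foreign_access(backend_domid,
                          pfn_to_mfn(page_to_pfn(pool[i].page)), 0);
  }

  /* Per request: no hypercalls, just a copy into a granted page. */
  memcpy(page_address(pool[slot].page), req_data, len);
  ring_req->seg[0].gref = pool[slot].gref;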

> It would be a lot easier to simply have a global pool of grant tables
> that effectively becomes the DMA pool.  Then the DMA API can bounce
> into that pool and those addresses can be placed on the ring.
> 
> It's a little different for Xen because now the backends have to deal
> with physical addresses but the concept is still the same.
> 
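
If I read the proposal right, it is roughly a set of Xen dma_ops that
bounce into a pool the backend was granted up front, in the same spirit
as swiotlb-xen (every name below is invented for illustration):

  /* Hypothetical map routine for a pre-granted bounce pool. */
  static dma_addr_t xen_pool_map_single(struct device *dev, void *buf,
                                        size_t len,
                                        enum dma_data_direction dir)
  {
          void *slot = pool_alloc(len);         /* invented helper */

          if (dir != DMA_FROM_DEVICE)
                  memcpy(slot, buf, len);

          /*
           * The backend already holds grants covering the whole pool,
           * so the "physical address" placed on the ring is meaningful
           * to it without any per-request grant operations.
           */
          return pool_phys(slot);               /* invented helper */
  }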

How would you apply this to Xen's security model? How can the
hypervisor effectively enforce access control? "Handle" and "physical
address" are essentially not the same concept; otherwise you wouldn't
have proposed this change. I'm not saying I'm against the change, just
that the description is too vague for me to see the bigger picture.

But one definite downside is that if we go with this change we then
have to maintain two different paths in the backend. However small the
difference is, it is still a burden.

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

