
Re: [Xen-devel] OASIS Virtual I/O Device (VIRTIO) TC



On Tue, Aug 27, 2013 at 12:06:59PM +0100, Wei Liu wrote:
> On Tue, Aug 27, 2013 at 11:37:41AM +0200, Daniel Kiper wrote:
> [...]
> > > Yes this is true.
> > >
> > > However the statement (?) in Daniel's first email (based upon the
> > > "Virtio PCI Card Specification") gave me the impression that they
> > > planned to support virtual PCI only...
> >
> > Right, VIRTIO TC started its work on the basis of the "Virtio PCI Card
> > Specification". However, it was stated immediately that the PCI
> > specification should be excluded from the main specification and form a
> > separate attachment. It means that VIRTIO would not require a PCI bus in
> > the future. Additionally, the current document contains a specification
> > for MMIO, which looks like a good alternative to PCI.
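
For readers who haven't looked at the MMIO transport yet: instead of PCI
config space the device exposes a small fixed register block, so discovery
is roughly the check below. This is only a sketch based on the virtio-mmio
register layout; the base pointer would normally come from the device tree
or a command line parameter, and the probe helper name is made up.

#include <stdint.h>

#define VIRTIO_MMIO_MAGIC_VALUE  0x000   /* reads "virt" = 0x74726976 */
#define VIRTIO_MMIO_VERSION      0x004
#define VIRTIO_MMIO_DEVICE_ID    0x008   /* 1 = net, 2 = block, ... */

/* Hypothetical probe helper: check the magic value and that the slot is
 * actually populated (device ID 0 means "no device"). */
static int probe_virtio_mmio(volatile uint32_t *base)
{
        if (base[VIRTIO_MMIO_MAGIC_VALUE / 4] != 0x74726976)
                return -1;              /* not a virtio-mmio device */
        if (base[VIRTIO_MMIO_DEVICE_ID / 4] == 0)
                return -1;              /* empty slot */
        return 0;
}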
> >
> > > > virt_mmio instead these days. A shared ring for cfg + evtchn model for
> > > > notification doesn't seem too far fetched to me.
> > > >
> > > > A far bigger problem IMHO with virtio is that it basically discards the
> > > > Xen security model. I don't know how hard it will be to retrofit gnttab
> > > > based access control into the virtio protocols. A lot would be my
> > > > guess...
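
To make the gnttab concern above concrete: today the frontend puts raw
guest-physical addresses into the vring descriptors and the backend maps
whatever it likes, so a retrofit would presumably mean granting each buffer
page and carrying a grant reference instead. A very rough sketch of the
frontend side, assuming the descriptor grew a hypothetical gref field;
gnttab_grant_foreign_access() is the existing Linux helper, and
virt_to_gfn() stands in for the frame lookup, which differs between PV and
HVM.

#include <xen/grant_table.h>
#include <xen/page.h>

/* Sketch only: grant one buffer page to the backend domain and hand back
 * the grant reference that would go into the (hypothetical) descriptor
 * field in place of a guest-physical address. */
static int share_buffer_page(domid_t backend_domid, void *buf, int readonly,
                             grant_ref_t *gref_out)
{
        int gref = gnttab_grant_foreign_access(backend_domid,
                                               virt_to_gfn(buf), readonly);
        if (gref < 0)
                return gref;            /* out of grant references */

        *gref_out = gref;               /* backend can map only this page */
        return 0;
}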
> >
> > I am going to convince the VIRTIO TC that the new spec should be Xen-friendly too.
> >
> > > Indeed, seems there is lots of work to do...
> >
> > By the way, Wei, Konrad told me that you worked on a VIRTIO implementation
> > for Xen and have hands-on experience with it. Could you tell me what
> > issues you ran into during your work on VIRTIO support for the Xen environment?
> >
>
> I would not say I have a very deep understanding of it because I only
> worked on it part-time for two months. But I will try my best to
> remember and elaborate. :-)
>
> For HVM guests Virtio works just fine. Xen uses QEMU as the device
> model, so we get Virtio devices for free. In this case all Virtio
> devices use the virtual PCI bus provided by QEMU, and basically most
> things just work.
>
> (A recent email on netdev@ from an engineer at Huawei suggests that
> virtio-net + vhost-net works for him, which is really great. Ref: "Is
> fallback vhost_net to qemu for live migrate available?"
> <521C1DCF.5090202@xxxxxxxxxx>)
>
> For PV guests it requires more work. Virtio is designed to be portable.
> The PCI and MMIO implementations you mentioned are in fact called
> "transports". As PV guests don't have a virtual PCI bus, and MMIO doesn't
> seem to work on Xen PV either (it's very possible that I'm wrong on this
> MMIO point as I'm not familiar with it), I had to implement a new
> transport. TBH the biggest issue is more of an engineering problem --
> virtual PCI is synchronous while Xen DomU and Dom0 run in parallel.
> The approach I chose back then was just a hack mimicking PCI,
> which made everything slow. It was not meant to be canonical or
> upstreamable. If I were to do this now I would apply the Xen
> PV model to the transport.
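
(As a side note, my naive reading of "applying the Xen PV model to the
transport" is that the virtqueue kick stops being a PCI/MMIO doorbell write
and becomes an event channel notification. A minimal sketch of what that
notify hook might look like; the info structure is made up and the wiring
into the vring creation is omitted.)

#include <linux/virtio.h>
#include <xen/events.h>

/* Hypothetical per-virtqueue state for a Xen transport: one event channel
 * bound to the backend domain per queue. */
struct xen_virtio_vq_info {
        evtchn_port_t evtchn;
        struct virtqueue *vq;
};

/* Would be passed as the notify callback when the transport sets up the
 * vring; replaces the doorbell register write of the PCI/MMIO transports. */
static bool xen_virtio_notify(struct virtqueue *vq)
{
        struct xen_virtio_vq_info *info = vq->priv;

        notify_remote_via_evtchn(info->evtchn);
        return true;
}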
>
> To make the whole thing work we also need a backend that supports Virtio.
> The closest thing at hand was QEMU, but again, the transport had to be
> re-implemented there as well.
>
> The hack is on http://wiki.xen.org/wiki/Virtio_On_Xen

Thanks for the lengthy explanation. It looks like a lot of work has been done.
Could you tell me, or point me to, where I could find info on how buffers/rings
are shared between the backend and frontend? Is the grant page mechanism used
at all? How are buffer/ring addresses passed between the backend and frontend?
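
(For comparison, the classic PV frontends answer the last question by writing
the grant reference of the shared ring page and the event channel number into
xenstore, roughly as in the sketch below; whether your hack does anything
similar is exactly what I am asking. Transaction and error handling are
omitted.)

#include <xen/xenbus.h>
#include <xen/grant_table.h>
#include <xen/events.h>

/* Sketch of the usual PV handshake: the frontend has already granted the
 * ring page (gref) and allocated an event channel (evtchn); it advertises
 * both in its own xenstore directory for the backend to pick up. */
static int publish_ring(struct xenbus_device *dev, grant_ref_t gref,
                        evtchn_port_t evtchn)
{
        int err;

        err = xenbus_printf(XBT_NIL, dev->nodename, "ring-ref", "%u", gref);
        if (err)
                return err;

        return xenbus_printf(XBT_NIL, dev->nodename, "event-channel", "%u",
                             evtchn);
}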

Daniel

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

