
Re: [Xen-devel] [virtio-dev] Re: VIRTIO - compatibility with different virtualization solutions

On Fri, Feb 21, 2014 at 11:24:14AM +1030, Rusty Russell wrote:
> Daniel Kiper <daniel.kiper@xxxxxxxxxx> writes:
> > Hey,
> >
> > On Thu, Feb 20, 2014 at 06:18:00PM +1030, Rusty Russell wrote:
> >> Ian Campbell <Ian.Campbell@xxxxxxxxxx> writes:
> >> > On Wed, 2014-02-19 at 10:56 +1030, Rusty Russell wrote:
> >> >> For platforms using EPT, I don't think you want anything but guest
> >> >> addresses, do you?
> >> >
> >> > No, the arguments for preventing unfettered access by backends to
> >> > frontend RAM applies to EPT as well.
> >>
> >> I can see how you'd parse my sentence that way, I think, but the two
> >> are orthogonal.
> >>
> >> AFAICT your grant-table access restrictions are page granularity, though
> >> you don't use page-aligned data (eg. in xen-netfront).  This level of
> >> access control is possible using the virtio ring too, but noone has
> >> implemented such a thing AFAIK.
> >
> > Could you say in short how it should be done? DMA API is an option but
> > if there is a simpler mechanism available in VIRTIO itself we will be
> > happy to use it in Xen.
> OK, this challenged me to think harder.
> The queue itself is effectively a grant table (as long as you don't give
> the backend write access to it).  The available ring tells you where the
> buffers are and whether they are readable or writable.  The used ring
> tells you when they're used.
> However, performance would suck due to no caching: you'd end up doing a
> map and unmap on every packet.  I'm assuming Xen currently avoids that
> somehow?  Seems likely...
> On the other hand, if we wanted a more Xen-like setup, it would look
> like this:
> 1) Abstract away the "physical addresses" to "handles" in the standard,
>    and allow some platform-specific mapping setup and teardown.

> 2) In Linux, implement a virtio DMA ops which handles the grant table
>    stuff for Xen (returning grant table ids + offset or something?),
>    noop for others.  This would be a runtime thing.

Or perhaps a KVM-specific DMA ops (which is a no-op) and a Xen-specific one.
Easy enough to implement.
> 3) In Linux, change the drivers to use this API.

> Now, Xen will not be able to use vhost to accelerate, but it doesn't now
> anyway.

Correct. Though one could implement a ring-of-grant-entries system,
shared by the frontend, the backend, and the hypervisor.

Then, when the backend tries to access memory it believes is already
mapped from the frontend (but which has not actually been mapped yet),
it traps to the hypervisor, which performs the mapping of the frontend
pages on the backend's behalf. A kind of lazy-grant system.

Anyhow, all of that is just implementation details and hand-waving.

If we wanted, we could extend vhost so that when it plucks entries off the
virtqueue it calls a platform-specific API. For KVM it would be all
no-ops; for Xen it would do a magic pony show or such <more hand-waving>.

> Am I missing anything?

On a bit different topic:

I am unclear about the asynchronous vs. synchronous nature of virtio
configuration. Xen is all about XenBus, which is more of a callback
mechanism. Virtio does its configuration over MMIO and PCI, which are
slow but do get you the values.

Can we somehow make it clear that the configuration setup can be asynchronous?
That would also mean that, in the future, configuration changes (say, when
migrating) could be conveyed to the virtio frontends via an interrupt
mechanism (or callback) if the new host has something important to say.

> Cheers,
> Rusty.

Xen-devel mailing list
