
Re: [Xen-devel] Proposal to allow setting up shared memory areas between VMs from xl config file

Hi Jan,

On 15/05/17 13:28, Jan Beulich wrote:
On 15.05.17 at 12:21, <julien.grall@xxxxxxx> wrote:
On 05/15/2017 09:52 AM, Jan Beulich wrote:
On 15.05.17 at 10:20, <julien.grall@xxxxxxx> wrote:
On 15/05/2017 09:08, Jan Beulich wrote:
On 12.05.17 at 19:01, <blackskygg@xxxxxxxxx> wrote:
1. Motivation and Description
Virtual machines use grant table hypercalls to set up shared pages for
inter-VM communication. These hypercalls are used by all PV
protocols today. However, very simple guests, such as baremetal
applications, might not have the infrastructure to handle the grant table.
This project is about setting up several shared memory areas for inter-VM
communication directly from the VM config file, so that the guest kernel
does not need grant table support to communicate with other guests.

I think it would help to compare your proposal with the alternative of
adding grant table infrastructure to such environments (which I
wouldn't expect to be all that difficult). After all introduction of a
(seemingly) redundant mechanism comes at the price of extra /
duplicate code in the tool stack and maybe even in the hypervisor.
Hence there needs to be a meaningfully higher gain than price here.

This is a key feature for embedded, because users want to be able to share
buffers very easily between two guests at domain creation time.

Adding the grant table driver in the guest OS has a high cost when the
goal is to run an unmodified OS in a VM. This is achievable on ARM if you
use passthrough.

"high cost" is pretty abstract and vague. And I admit I have difficulty
seeing how an entirely unmodified OS could leverage this newly
proposed sharing model.

Let's step back for a moment; I will come back to Zhongze's proposal afterwards.

Using grant tables in the guest will obviously require the grant-table
driver. That is not too bad. However, how do you pass the grant ref
number to the other guest? The only way I can see is xenstore, so that is
yet another driver to port.

Just look at the amount of code that was needed to get PV drivers
to work in x86 HVM guests. It's not all that much. Plus making such
available in a new environment doesn't normally mean everything
needs to be written from scratch.

Even if PV drivers don't need to be written from scratch, porting them to a new OS has a certain cost. By making most of the VM interface agnostic to Xen, we potentially allow vendors to switch to Xen more easily.

In Zhongze's proposal, the shared page will be mapped at a specific
address in the guest memory. I agree this will require some work in the
toolstack; on the hypervisor side we could re-use the foreign mapping
API. But on the guest side there is nothing Xen-specific to do.
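
To illustrate, the config-file side could look something like this (a hypothetical sketch only; the option name `static_shm`, the key names, and the role values are my assumptions, not the syntax of Zhongze's proposal):

```
# Hypothetical sketch -- keys and values are assumed, not the final syntax.
# Guest A's xl config: creates and owns the shared area.
static_shm = [ "id=shm0, begin=0x40000000, end=0x40001000, role=master" ]

# Guest B's xl config: maps the same area, possibly at a different
# guest physical address.
static_shm = [ "id=shm0, begin=0x50000000, end=0x50001000, role=slave" ]
```

Matching `id` values would tell the toolstack that the two ranges refer to the same underlying memory.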

So what is the equivalent of the shared page on bare hardware?

What's the benefit? Baremetal guests are usually tiny; you could use the
device tree (and hence a generic mechanism) to present the shared page for
communication. This means no Xen PV drivers, and therefore it is easier to
move an OS into a Xen VM.
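
For instance, the toolstack could describe such a region with a reserved-memory node (a hypothetical sketch; the compatible string, node name and addresses are assumptions, not a defined binding):

```
/* Hypothetical sketch -- node name, compatible string and addresses are
 * illustrative, not a defined binding. */
reserved-memory {
	#address-cells = <2>;
	#size-cells = <2>;
	ranges;

	shmem@40000000 {
		compatible = "xen,shared-memory";  /* assumed compatible string */
		reg = <0x0 0x40000000 0x0 0x1000>; /* guest address and size */
	};
};
```

A baremetal guest that already parses the device tree could then locate the region with no Xen-specific code at all.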

Is this intended to be an ARM-specific extension, or a generic one?
There's no DT on x86 to pass such information, and I can't easily
see alternatives there. Also the consumer of the shared page info
is still a PV component of the guest. You simply can't have an
entirely unmodified guest which at the same time is Xen (or
whatever other component sits at the other end of the shared
page) aware.

The toolstack will set up the shared page in both the producer and the consumer guest. This will be done during domain creation. My understanding is that it will not be possible to share a page after the two domains have been created.

This feature is not meant to replace the grant table; it is here to ease sharing a page between two guests without introducing any Xen knowledge in either guest.


Julien Grall

Xen-devel mailing list


