
Re: [Xen-devel] [RFC v4] Proposal to allow setting up shared memory areas between VMs from xl config file



Hi Edgar,

Thank you for your interest in this project.

2017-08-01 5:55 GMT+08:00 Edgar E. Iglesias <edgar.iglesias@xxxxxxxxxx>:
> On Mon, Jul 31, 2017 at 02:30:47PM -0700, Stefano Stabellini wrote:
>> On Mon, 31 Jul 2017, Edgar E. Iglesias wrote:
>> > On Fri, Jul 28, 2017 at 09:03:15PM +0800, Zhongze Liu wrote:
>> > > ====================================================
>> > > 1. Motivation and Description
>> >
>> > Hi,
>> >
>> > I think this looks quite useful. I have a few comments inline.
>>
>> Hi Edgar, thanks for giving it a look!
>>
>>
>> > > ====================================================
>> > > Virtual machines use grant table hypercalls to set up shared pages for
>> > > inter-VM communication. These hypercalls are used by all PV protocols
>> > > today. However, very simple guests, such as baremetal applications,
>> > > might not have the infrastructure to handle the grant table.
>> > > This project is about setting up several shared memory areas for
>> > > inter-VM communication directly from the VM config file, so that the
>> > > guest kernel doesn't need grant table support (not unusual in the
>> > > embedded space) to be able to communicate with other guests.
>> > >
>> > > ====================================================
>> > > 2. Implementation Plan:
>> > > ====================================================
>> > >
>> > > ======================================
>> > > 2.1 Introduce a new VM config option in xl:
>> > > ======================================
>> > >
>> > > 2.1.1 Design Goals
>> > > ~~~~~~~~~~~~~~~~~~~~~~~~~~~
>> > >
>> > > The shared areas should be shareable among several (>=2) VMs, so every
>> > > shared physical memory area is assigned to a set of VMs. Therefore, a
>> > > “token” or “identifier” is needed to uniquely identify a backing memory
>> > > area. A string no longer than 128 bytes is used here to serve that
>> > > purpose.
>> > >
>> > > The backing area is taken from one domain, which we will regard as the
>> > > "master domain", and this domain should be created prior to any of the
>> > > "slave domains". Again, some kind of tag is needed to identify which
>> > > domain is the "master domain".
>> > >
>> > > The user should also be able to specify the permissions and cacheability
>> > > (and, for ARM guests, shareability) of the pages to be shared.
>> > >
>> > > 2.1.2 Syntax and Behavior
>> > > ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>> > > The following example illustrates the syntax of the proposed config entry
>> > > (suppose that we're on x86):
>> > >
>> > > In the xl config file of vm1:
>> > >   static_shm = [ 'id=ID1, begin=0x100000, end=0x200000, role=master,
>> > >                   cache_policy=x86_normal, prot=rw',
>> > >
>> > >                  'id=ID2, begin=0x300000, end=0x400000, role=master' ]
>> > >
>> > > In the xl config file of vm2:
>> > >   static_shm = [ 'id=ID1, offset=0, begin=0x500000, end=0x600000,
>> > >                   role=slave, prot=ro' ]
>> > >
>> > > In the xl config file of vm3:
>> > >   static_shm = [ 'id=ID2, offset=0x10000, begin=0x690000, end=0x800000,
>> > >                   role=slave, prot=ro' ]
>> > >
>> > > where:
>> > >   @id                   The identifier of the backing memory area.
>> > >                         Can be any string that matches the regexp
>> > >                         "[^ \t\n,]+" and is no longer than 128
>> > >                         characters.
>> > >
>> > >   @offset               Can only appear when @role = slave. The sharing
>> > >                         will start from the beginning of the backing
>> > >                         memory area plus this offset. If not set, it
>> > >                         defaults to zero.
>> > >                         Can be decimal or hexadecimal (of the form
>> > >                         "0x20000"), and should be a multiple of the
>> > >                         hypervisor page granularity (currently 4K on
>> > >                         both ARM and x86).
>> > >
>> > >   @begin/end            The boundaries of the shared memory area. The
>> > >                         format requirements are the same as for @offset.
>> >
>> > I'm assuming this is all specified in the GFN space and also not MFN
>> > contiguous?
>> > Would it be possible to allow the specification of MFN mappings
>> > that are contiguous?
>>
>> That could be done with the iomem= parameter?
>
> The missing part from the iomem parameter is the attributes: cacheability,
> shareability, etc.
> But we could perhaps add that to iomem somehow.
>
>
>>
>>
>> > This would be useful to map specific kinds of memory (e.g. on-chip RAMs).
>> >
>> > Other use cases are when not only guests are sharing the pages but also
>> > devices. In some cases these devices may be locked in with low-latency
>> > access to specific memory regions.
>> >
>> > Perhaps something like the following?
>> > addr=gfn@<mfn>
>> > size=0x1000
>> >
>> > with mfn being optional?
>>
>> I can see that it might be useful, but we are trying to keep the scope
>> small to be able to complete the project within the limited timeframe
>> allowed by GSoC. We only have one month left! We risk not getting the
>> feature completed.  Once this set of features is done and committed, we
>> can expand on it.

Yes, this is just going to be the very first working version, and I'm willing
to help implement the missing features and maintain it in the long run after
GSoC.
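
Just to make the idea concrete (this is not part of the current proposal, and
the @mfn key below is purely hypothetical), a future extension along the lines
of Edgar's addr=gfn@<mfn> suggestion could perhaps look like:

In the xl config file of the master:
  static_shm = [ 'id=ID3, begin=0x40000000, end=0x40010000, role=master,
                  mfn=0xfffc0, cache_policy=ARM_normal, prot=rw' ]

where the hypothetical @mfn would pin the backing pages to a contiguous
machine range (e.g. an on-chip RAM) instead of having them allocated from the
master domain's memory.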

>>
>> For the sake of this document, we should make clear that addresses are in
>> the gfn space and memory is allocated to the master domain.
>
> OK, I understand. :-)
> Yes, documenting it would be good.

I will add this to the next version of the proposal.
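
Roughly along these lines, reusing the ID1 example above (just a draft of the
wording, to be refined in the actual document):

  # In vm1 (role=master): the pages backing ID1 are allocated from vm1's own
  # memory and appear at vm1's guest-physical (gfn-space) addresses
  # 0x100000-0x200000.
  static_shm = [ 'id=ID1, begin=0x100000, end=0x200000, role=master' ]

  # In vm2 (role=slave): the same pages are mapped at vm2's guest-physical
  # addresses 0x500000-0x600000; no additional memory is allocated for vm2.
  static_shm = [ 'id=ID1, begin=0x500000, end=0x600000, role=slave' ]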

>
>
>>
>>
>> > >   @role                 Can only be 'master' or 'slave'; it defaults to
>> > >                         'slave'.
>> > >
>> > >   @prot                 When @role = master, this means the largest set
>> > >                         of stage-2 permission flags that can be granted
>> > >                         to the slave domains.
>> > >                         When @role = slave, this means the stage-2
>> > >                         permission flags of the shared memory area.
>> > >                         Currently only 'rw' is supported. If not set, it
>> > >                         defaults to 'rw'.
>> > >
>> > >   @cache_policy         The stage-2 cacheability/shareability attributes
>> > >                         of the shared memory area. Currently, only two
>> > >                         policies are supported:
>> > >                           * ARM_normal: Only applicable to ARM guests.
>> > >                                         This would mean Inner and Outer
>> > >                                         Write-Back Cacheable, and Inner
>> > >                                         Shareable.
>> >
>> >
>> > Is there a reason not to set this to Outer Shareable?
>> > Again, mainly useful when these pages get shared with devices as well.
>> >
>> > The guest can always lower it to Inner Shareable via S1 tables if needed.
>>
>> I don't think we can support memory sharing with devices in this version
>> of the document (see above about GSoC timelines). Normal memory is inner
>> shareable in Xen today, so it makes sense to default to that.
>
> I thought we mapped RAM as Outer Shareable to guests, but you seem to be right.
> I think we should be mapping all RAM as Outer Shareable and then let the
> guest decide what is Inner and what is Outer via its S1 tables.
> Right now it would be impossible to be coherent with a DMA device outside
> of the Inner domain...
>
> Perhaps we should fix that and then ARM_normal would by itself become Outer.
> If there's agreement I can test it and send a patch.

This sounds reasonable. If you send the patch, could you please add me to
the Cc list, so that I can see what's going on and update the document
accordingly?
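
For reference, once that is settled, a complete ARM master/slave pair using
all of the currently proposed keys would look something like this (the id and
the addresses are arbitrary, just pulling together the syntax above):

In the xl config file of the ARM master domain:
  static_shm = [ 'id=SHM0, begin=0x48000000, end=0x48100000, role=master,
                  cache_policy=ARM_normal, prot=rw' ]

In the xl config file of an ARM slave domain:
  static_shm = [ 'id=SHM0, offset=0x0, begin=0x70000000, end=0x70100000,
                  role=slave, prot=rw' ]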

Cheers,

Zhongze Liu

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

