
Re: [Xen-devel] Proposal to allow setting up shared memory areas between VMs from xl config file



Hi Zhongze

This is a nice write-up. Some comments below. Feel free to disagree with
what I say; this is more a discussion than a criticism of your design
or plan.

On Sat, May 13, 2017 at 01:01:39AM +0800, Zhongze Liu wrote:
> Hi, Xen developers,
> 
> I'm Zhongze Liu, a GSoC student this year. Glad to meet you in the
> Xen Project. As an initial step towards implementing my GSoC proposal,
> which is still a draft, I'm posting it here, and I hope to hear your
> suggestions.
> 
> ====================================================
> 1. Motivation and Description
> ====================================================
> Virtual machines use grant table hypercalls to set up shared pages for
> inter-VM communication. These hypercalls are used by all PV
> protocols today. However, very simple guests, such as baremetal
> applications, might not have the infrastructure to handle the grant table.
> This project is about setting up several shared memory areas for inter-VM
> communication directly from the VM config file, so that the guest kernel
> doesn't need grant table support to be able to communicate with other
> guests.
> 
> ====================================================
> 2. Implementation Plan:
> ====================================================
> 
> ======================================
> 2.1 Introduce a new VM config option in xl:
> ======================================
> The shared areas should be shareable among several VMs:
> every shared physical memory area is assigned to a set of VMs.
> Therefore, a "token" or "identifier" is needed to uniquely
> identify a backing memory area.
> 
> 
> I would suggest using an unsigned integer to serve as the identifier.
> For example:
> 
> In xl config file of vm1:
> 
>     static_shared_mem = ["addr_range1 = ID1", "addr_range2 = ID2"]
> 
> In xl config file of vm2:
> 
>     static_shared_mem = ["addr_range3 = ID1"]
> 
> In xl config file of vm3:
> 
>     static_shared_mem = ["addr_range4 = ID2"]

I can envisage that you will need some more attributes: what about
access permissions like RW / RO / WO (or even X)?

Also, I assume the granularity of the mapping is a page, but as far as I
can tell there are two page granularities on ARM; you need to consider
both, and what should happen if you mix and match them. What about
mapping several pages where different VMs use overlapping ranges?

Can you give some concrete examples? What does addr_rangeX look like in
practice?

> 
> 
> In the example above. A memory area A1 will be shared between
> vm1 and vm2 -- vm1 can access this area using addr_range1
> and vm2 using addr_range3. Likewise, a memory area A2 will be
> shared between vm1 and vm3 -- vm1 can access A2 using addr_range2
> and vm3 using addr_range4.
> 
> The shared memory area denoted by an identifier IDx will be
> allocated when it first appears, and the memory pages will be taken from
> the first VM whose static_shared_mem list contains IDx. Taking the above
> config files as an example: if we instantiate vm1, vm2 and vm3, one after
> another, the memory areas denoted by ID1 and ID2 will both be allocated
> in, and taken from, vm1.
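If I read the allocation rule above correctly, it could be sketched like
this (purely illustrative Python, not real xl/libxl code; all names are
made up):

```python
# Model of the proposed rule: the first domain whose static_shared_mem
# list mentions an ID becomes the owner and backs the area with its own
# pages; later domains merely map the already-existing area.

shared_areas = {}  # ID -> {"owner": domid, "mappings": {domid: addr_range}}

def create_domain(domid, static_shared_mem):
    """static_shared_mem: dict of addr_range -> ID, as in the config file."""
    for addr_range, area_id in static_shared_mem.items():
        if area_id not in shared_areas:
            # First appearance: allocate the backing pages from this domain.
            shared_areas[area_id] = {"owner": domid,
                                     "mappings": {domid: addr_range}}
        else:
            # Already backed: just map it into the new domain.
            shared_areas[area_id]["mappings"][domid] = addr_range

create_domain(1, {"addr_range1": "ID1", "addr_range2": "ID2"})  # vm1
create_domain(2, {"addr_range3": "ID1"})                        # vm2
create_domain(3, {"addr_range4": "ID2"})                        # vm3
assert shared_areas["ID1"]["owner"] == 1
assert shared_areas["ID2"]["owner"] == 1
```

Note that the outcome depends entirely on creation order, which is what
worries me below.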

Hmm... I can see some potential hazards. Currently multiple xl processes
are serialised by a lock, and your assumption is that creation is done in
order, but suppose some time later they can run in parallel. When
several "xl create" invocations race with each other, what will
happen?

This can be solved by serialising in libxl or hypervisor, I think.
It is up to you to choose where to do it.

Also, please consider what happens when you destroy the owner domain
before the rest. Proper reference counting should be done in the
hypervisor.
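To make the refcounting point concrete, here is a toy model (plain
Python, nothing Xen-specific) of the semantics I have in mind: the
backing pages must survive until the last mapper goes away, even if the
"owner" is destroyed first.

```python
# Toy reference-counting model for teardown of a shared area.

class SharedArea:
    def __init__(self):
        self.refs = 0
        self.freed = False

    def map(self):
        self.refs += 1

    def unmap(self):
        self.refs -= 1
        if self.refs == 0:
            self.freed = True  # only now may the pages be reclaimed

area = SharedArea()
area.map()    # owner (vm1) maps the area
area.map()    # vm2 maps the area
area.unmap()  # vm1 is destroyed first: pages must stay alive
assert not area.freed
area.unmap()  # vm2 is destroyed: now it is safe to free
assert area.freed
```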

> 
> ======================================
> 2.2 Store the mem-sharing information in xenstore
> ======================================
> This information should include the length and owner of each area, and
> it should also include information about where the backing memory area
> is mapped in every VM that is using it. This information should be
> known to the xl command and all domains, so we use xenstore to keep
> it. The current plan is to place the information under
> /local/shared_mem/ID. Still take the above config files as an example:
> 
> If we instantiate vm1, vm2 and vm3 one after another, the
> /local/shared_mem subtree will evolve as follows.
> 
> After vm1 is instantiated, the output of "xenstore ls -f"
> will contain something like this:
> 
>     /local/shared_mem/ID1/owner = dom_id_of_vm1
> 
>     /local/shared_mem/ID1/size = sizeof_addr_range1
> 
>     /local/shared_mem/ID1/mappings/dom_id_of_vm1 = addr_range1
> 
> 
>     /local/shared_mem/ID2/owner = dom_id_of_vm1
> 
>     /local/shared_mem/ID2/size = sizeof_addr_range2
> 
>     /local/shared_mem/ID2/mappings/dom_id_of_vm1 = addr_range2
> 
> 
> After VM2 was instantiated, the following new lines will appear:
> 
>     /local/shared_mem/ID1/mappings/dom_id_of_vm2 = addr_range3
> 
> 
> After VM3 was instantiated, the following new lines will appear:
> 
>     /local/shared_mem/ID2/mappings/dom_id_of_vm2 = addr_range4
> 
> When we encounter an id IDx during "xl create":
> 
>   + If it's not under /local/shared_mem, create the corresponding entries
>     (owner, size, and mappings) in xenstore, and allocate the memory from
>     the newly created domain.
> 
>   + If it's found under /local/shared_mem, map the pages into the newly
>     created domain, and add the current domain to
>     /local/shared_mem/IDx/mappings.
> 
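For what it's worth, the create-or-map decision above could be modelled
like this (xenstore mocked as a plain dict; illustrative only, and I've
folded in a size check since your section 2.4 mentions mismatched
lengths):

```python
# Model of the proposed "xl create" logic against /local/shared_mem,
# with xenstore mocked as a flat dict of path -> value.

xenstore = {}

def handle_id(domid, area_id, addr_range, size):
    base = "/local/shared_mem/" + area_id
    if base + "/owner" not in xenstore:
        # Not seen before: record owner/size; pages come from this domain.
        xenstore[base + "/owner"] = domid
        xenstore[base + "/size"] = size
    elif xenstore[base + "/size"] != size:
        # Error handling from section 2.4: mismatched area lengths.
        raise ValueError("mismatched size for " + area_id)
    # In both cases, record where this domain maps the area.
    xenstore[base + "/mappings/" + str(domid)] = addr_range

handle_id(1, "ID1", "addr_range1", 4096)  # vm1 creates and owns ID1
handle_id(2, "ID1", "addr_range3", 4096)  # vm2 only adds a mapping
assert xenstore["/local/shared_mem/ID1/owner"] == 1
assert "/local/shared_mem/ID1/mappings/2" in xenstore
```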

Again, please think about destruction as well.

At this point I think modelling after POSIX shared memory makes more
sense. That is, there isn't one "owner" for the memory. You get hold of
the shared memory via a key (ID in your case?).

I'm not entirely sure xenstore is the right place to store such
information, but I don't have other suggestions either. Maybe when I
have a better understanding of the problem I can make better suggestions.
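To illustrate the POSIX model I mean, compare how POSIX shared memory
works: segments are created and attached purely by key, with no
privileged owner. Python's multiprocessing.shared_memory wraps POSIX shm
and shows the same create/attach split (the segment name here is made
up):

```python
# POSIX-style shared memory: "vm1" creates a segment under a key,
# "vm2" attaches to the same segment by that key.
from multiprocessing import shared_memory

seg = shared_memory.SharedMemory(name="xen_demo_ID1", create=True, size=4096)
seg.buf[0] = 42                                          # "vm1" writes

peer = shared_memory.SharedMemory(name="xen_demo_ID1")   # "vm2" attaches by key
assert peer.buf[0] == 42                                 # sees the same page

peer.close()
seg.close()
seg.unlink()  # the segment lives until explicitly unlinked, not until
              # its creator exits
```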

> ======================================
> 2.3 mapping the memory areas
> ======================================
> Handle the newly added config option in tools/{xl, libxl}
> and use tools/libxc to do the actual memory mapping.
> 
> 
> ======================================
> 2.4 error handling
> ======================================
> Add code to handle various errors: invalid addresses,
> mismatched lengths of memory areas, etc.
> 
> ====================================================
> 3. Expected Outcomes/Goals:
> ====================================================
> A new VM config option in xl will be introduced, allowing users to set up
> several shared memory areas for inter-VM communication.
> This should work on both x86 and ARM.
> 
> [See also: 
> https://wiki.xenproject.org/wiki/Outreach_Program_Projects#Share_a_page_in_memory_from_the_VM_config_file]
> 

Overall I think this document is a good starting point. There are
details to hash out but we have a lot of time for that.

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
