
Re: [Xen-devel] [RFC v2]Proposal to allow setting up shared memory areas between VMs from xl config file



Sorry for the late reply.

I can see the thread already contains answers to some of my questions, so
I will just reply to the bits that are still relevant.

On Fri, Jun 23, 2017 at 01:27:24AM +0800, Zhongze Liu wrote:
> Hi Wei,
> 
> Thank you for your valuable comments.
> 
> 2017-06-21 23:09 GMT+08:00 Wei Liu <wei.liu2@xxxxxxxxxx>:
[...]
> >> To handle all these, I would suggest using an unsigned integer to serve
> >> as the identifier, and using a "master" tag in the master domain's xl
> >> config entry to announce that she will provide the backing memory pages.
> >> A separate entry would be used to describe the attributes of the shared
> >> memory area, of the form "prot=RW".
> >
> > I think using an integer is too limiting. You would need the user to
> > know if a particular number is already used. Maybe using a number is
> > good enough for the use case you have in mind, but it is not future
> > proof. I don't know how sophisticated we want this to be, though.
> >
> 
> Sounds reasonable. I chose integers because I think integers are fast and
> easy to manipulate. But integers are somewhat hard to remember, and that
> isn't a good thing from a user's point of view. So maybe I'll make it a
> string with a maximum length of 32, or longer.
> 

Sounds reasonable.
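
Just to make the string idea concrete (this is only my own sketch; the
names are made up and not part of the proposal), I imagine the check on
the tools side could end up looking something like:

    /* Sketch only: validate a string identifier for a static shared
     * memory area.  MAX_SSHM_ID_LEN and sshm_id_is_valid() are
     * hypothetical names, not existing libxl code. */
    #include <ctype.h>
    #include <stdbool.h>
    #include <string.h>

    #define MAX_SSHM_ID_LEN 32   /* assumed maximum length, as discussed */

    static bool sshm_id_is_valid(const char *id)
    {
        size_t i, len = strlen(id);

        if (len == 0 || len > MAX_SSHM_ID_LEN)
            return false;

        /* Keep to a conservative character set so the id can be used as
         * a lookup key without any quoting trouble. */
        for (i = 0; i < len; i++)
            if (!isalnum((unsigned char)id[i]) &&
                id[i] != '_' && id[i] != '-')
                return false;

        return true;
    }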

[...]
> >>  granularity = 4k, prot = RW, master"]
> >>
> >> In xl config file of vm2:
> >>
> >>     static_shared_mem = ["id = ID1, begin = gmfn5, end = gmfn6,
> >>                           granularity = 4k, prot = RO"]
> >>
> >> In xl config file of vm3:
> >>
> >>     static_shared_mem = ["id = ID2, begin = gmfn7, end = gmfn8,
> >>                           granularity = 4k, prot = RW"]
> >>
> >> gmfn's above are all hex of the form "0x20000".
> >>
> >> In the example above, a memory area ID1 will be shared between vm1 and
> >> vm2. This area will be taken from vm1 and mapped into vm2's stage-2 page
> >> table. The parameter "prot=RO" means that this memory area is offered
> >> with read-only permission. vm1 can access this area using gmfn1~gmfn2,
> >> and vm2 using gmfn5~gmfn6. Likewise, a memory area ID2 will be shared
> >> between vm1 and vm3 with read and write permissions. vm1 is the master
> >> and vm3 the slave. vm1 can access the area using gmfn3~gmfn4 and vm3
> >> using gmfn7~gmfn8.
> >>
> >> The "granularity" is optional in the slaves' config entries. But if it's
> >> presented in the slaves' config entry, it has to be the same with its 
> >> master's.
> >> Besides, the size of the gmfn range must also match. And overlapping 
> >> backing
> >> memory areas are well defined.
> >>
> >
> > What do you mean by "well defined"?
> 
> Em... I think I should have put it more clearly. In fact, I mean that
> overlapping areas are allowed, and when two areas overlap with each other,
> any operations done on the overlapping area will be seen on both sides.
> Besides this, they just act like two independent areas. And the job of
> serializing access to the overlapping area is left to the user.
> 

OK. "Well defined" means "clearly defined or described" but I didn't see
any definition or description of it. Just use "allowed" should be OK.
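
To make "serialization is left to the user" a bit more concrete, here is a
minimal sketch of what the guests could do themselves, assuming they agree
to reserve the first bytes of the shared area for a lock and both map the
area with normal cacheable attributes. The layout and names below are made
up, not part of the proposal:

    /* Sketch only: a guest-agreed spinlock kept inside the shared area.
     * One side must initialise hdr->lock with ATOMIC_FLAG_INIT before
     * the other starts using it. */
    #include <stdatomic.h>

    struct shm_hdr {
        atomic_flag lock;       /* lock agreed on by both guests */
        /* shared payload follows */
    };

    static void shm_lock(struct shm_hdr *hdr)
    {
        while (atomic_flag_test_and_set_explicit(&hdr->lock,
                                                 memory_order_acquire))
            ; /* spin until the other guest drops the lock */
    }

    static void shm_unlock(struct shm_hdr *hdr)
    {
        atomic_flag_clear_explicit(&hdr->lock, memory_order_release);
    }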

> >
> > Why is inserting a sub-range not allowed?
> >
> 
> This is also a feature under consideration. Maybe the use cases that I
> have in mind are not that complicated, so I chose to keep it simple. But
> after giving it a second thought, I found that this will not add too much
> complexity to the code and will be useful in some cases. So I think I'll
> allow this in my next version of the proposal.
> 

That's what I thought as well. Essentially it is not any harder than
implementing the overlapping case.
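
FWIW, once sub-ranges are allowed I expect the range check simply turns
from "the sizes must be equal" into "the slave's window must fit inside
the master's area". A rough sketch, with made-up names and assuming the
gmfn ranges are inclusive as in the examples above:

    /* Sketch only: struct sshm_range and slave_range_ok() are
     * hypothetical, not existing libxl code. */
    #include <stdbool.h>
    #include <stdint.h>

    struct sshm_range {
        uint64_t begin;   /* first gmfn of the area */
        uint64_t end;     /* last gmfn of the area (inclusive) */
    };

    static bool slave_range_ok(const struct sshm_range *master,
                               const struct sshm_range *slave,
                               uint64_t offset /* hypothetical offset of
                                                  the slave's sub-range
                                                  into the master's area,
                                                  in pages */)
    {
        uint64_t msize, ssize;

        if (master->end < master->begin || slave->end < slave->begin)
            return false;

        msize = master->end - master->begin + 1;
        ssize = slave->end - slave->begin + 1;

        /* The slave's window, shifted by the requested offset, must
         * fall entirely within the master's backing area. */
        return offset <= msize && ssize <= msize - offset;
    }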
