
Re: [Xen-devel] [PATCH v5 1/2] x86/mem-sharing: Bulk mem-sharing entire domains



On Tue, Jul 5, 2016 at 8:35 AM, George Dunlap <george.dunlap@xxxxxxxxxx> wrote:
> On Thu, Jun 23, 2016 at 4:42 PM, Tamas K Lengyel <tamas@xxxxxxxxxxxxx> wrote:
>>>> +            if ( !atomic_read(&d->pause_count) ||
>>>> +                 !atomic_read(&cd->pause_count) )
>>>> +            {
>>>> +                rcu_unlock_domain(cd);
>>>> +                rc = -EINVAL;
>>>> +                goto out;
>>>> +            }
>>>
>>> I realize that Jan asked for this, but I'm really not sure what good
>>> this is supposed to do, since the guest can be un-paused at any point
>>> halfway through the transaction.
>>>
>>> Wouldn't it make more sense to have this function pause and unpause
>>> the domains itself?
>>
>> The domains are paused by the user when this op is called, so this is
>> just a sanity check ensuring you are not issuing the op on some other
>> domain. If this function paused the domains as well, all it would do
>> is increase the pause count. This is the only intended use-case for
>> this function at this time. It would make no sense to issue this op on
>> domains that are running, as that pretty much guarantees that some of
>> their memory has already diverged, and thus bulk-sharing their memory
>> would have some unintended side-effects.
>
> Yes, I understand all that.  But what I'm saying (and mostly this is
> actually to Jan that I'm saying it) is that this check, as written,
> seems pointless to me.  We're effectively *not* trusting the toolstack
> to pause the domain, but we *are* trust the toolstack not to unpause
> the domain before the operation is completed.  It seems to me we
> should either trust the toolstack to pause the domain and leave it
> paused -- letting it suffer the consequences if it fails to do so --
> or we should pause and unpause the domain ourselves.
>
> Adding an extra pause count is simple and harmless.  If we're going to
> make an effort to think about buggy toolstacks, we might as well just
> make it 100% safe.

Well, there are many more ways the toolstack could cause issues for
this op, so I would say let's just trust the toolstack to behave as
intended. I'll remove the pause check and point out in a comment that
the intended use-case assumes both domains are paused and remain
paused for the duration of the op.
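
In other words, the check in the hunk above would turn into just a
comment, roughly along these lines (a sketch only, exact wording to be
settled in the next revision):

    /*
     * The caller is expected to pause both the source domain (d) and
     * the client domain (cd) before issuing this op and to keep them
     * paused until it completes; bulk-sharing the memory of running
     * domains would deduplicate pages that may already have diverged.
     */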

>
>>> To start with, it seems like having a "bulk share" option which could
>>> do just a specific range would be useful, as well as a "bulk share"
>>> which automatically dedupes the entire set of memory.
>>
>> I have no need for such options.
>
> Yes, but it's not unlikely that somebody else may need them at some
> point in the future; and it's only a tiny bit of adjustment to make
> them more generally useful than just your current specific use case,
> so that we can avoid changing the interface, or adding yet another
> hypercall in the future.
>
>>> Secondly, structuring the information like the other memory operations
>>> -- for example, "nr_extents" and "nr_done" -- would make it more
>>> consistent, and would also allow you to avoid overwriting one of
>>> the "in" values and having to restore it when you were done.
>>
>> I don't see any harm in clearing this value to 0 when the op finishes,
>> so I don't think it would really make much difference if we had
>> another field just for scratch-space use. I'm fine with adding that,
>> but it just seems a bit odd to me.
>
> Well, clearing the value to zero seems a bit odd to me. :-)  But more
> to the point, it's valuable to have the interface 1) be flexible
> enough to bulk-share a range but not the entire address space, and
> 2) be similar enough to existing interfaces that nobody needs to
> figure out how it works.

I really doubt there will be any users for this feature, but it's a
simple enough adjustment that I'm fine with the change.
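
Something along these lines for the interface then, I suppose (just a
sketch; the field names are illustrative rather than final):

    struct xen_mem_sharing_op_bulk {
        uint64_t start;          /* IN: first gfn of the range to share  */
        uint64_t nr_extents;     /* IN: number of pages in the range     */
        uint64_t nr_done;        /* OUT: pages processed so far, also
                                  * used to continue a preempted op      */
        domid_t  client_domain;  /* IN: the domain to share memory with  */
    };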

Thanks,
Tamas
