
Re: [Xen-devel] Ping: [PATCH] VMX: sync CPU state upon vCPU destruction



>>> On 21.11.17 at 15:07, <igor.druzhinin@xxxxxxxxxx> wrote:
> On 21/11/17 13:22, Jan Beulich wrote:
>>>>> On 09.11.17 at 15:49, <JBeulich@xxxxxxxx> wrote:
>>> See the code comment being added for why we need this.
>>>
>>> Reported-by: Igor Druzhinin <igor.druzhinin@xxxxxxxxxx>
>>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>> 
>> I realize we aren't settled yet on where to put the sync call. The
>> discussion appears to have stalled, though. Just to recap,
>> alternatives to the placement below are
>> - at the top of complete_domain_destroy(), being the specific
>>   RCU callback exhibiting the problem (others are unlikely to
>>   touch guest state)
>> - in rcu_do_batch(), paralleling the similar call from
>>   do_tasklet_work()
> 
> rcu_do_batch() sounds better to me. As I said before, I think the
> problem is general to the hypervisor (not specific to VMX) and might
> appear in other places as well.

The question here is: In what other cases do we expect an RCU
callback to possibly touch guest state? I think the common use is
to merely free some memory in a delayed fashion.
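
For reference, the narrower placement would limit the sync to the one
callback known to touch guest state. A sketch of what that would look
like, assuming the same sync_local_execstate() call that
do_tasklet_work() uses, and with the body of the function paraphrased
from xen/common/domain.c from memory rather than taken from an actual
patch:

static void complete_domain_destroy(struct rcu_head *head)
{
    struct domain *d = container_of(head, struct domain, rcu);

    /*
     * Sketch only. The vCPUs being freed may still have their register
     * state (and, on VMX, their VMCS) lazily kept on this pCPU, since
     * the RCU callback runs in softirq context on whichever CPU
     * processes the batch. Force that state out before any of it is
     * torn down.
     */
    sync_local_execstate();

    /* ... existing vCPU / domain teardown (using d), unchanged ... */
}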

> The choices you outlined appear to differ in whether we solve the
> general problem, probably with some minor performance impact, or solve
> the ad-hoc problem but make the system more entangled. I'm more
> inclined towards the first choice, because in this particular scenario
> the performance impact should be negligible.

For the problem at hand there's no question about the performance
effect being acceptable. The question is whether doing this for _other_
RCU callbacks would introduce a performance drop in certain cases.
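
To make that comparison concrete, the general placement in
rcu_do_batch() that Igor favours would be roughly of this shape (again
only a sketch: the batch accounting of the real function in
xen/common/rcupdate.c is elided and the field names are from memory):

static void rcu_do_batch(struct rcu_data *rdp)
{
    struct rcu_head *list = rdp->donelist;

    /*
     * Sketch only. Paralleling do_tasklet_work(): sync any lazily kept
     * guest state once, before any callback runs, so that a callback
     * such as complete_domain_destroy() never races with a still
     * loaded VMCS. Being the general solution, this taxes every RCU
     * batch, whether or not its callbacks care about guest state; that
     * cost is what the question above is about.
     */
    sync_local_execstate();

    while ( list )
    {
        struct rcu_head *next = list->next;

        list->func(list);
        list = next;
    }

    /* ... per-CPU bookkeeping and re-queuing elided ... */
}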

Jan

