
Re: [Xen-devel] [PATCH 0/2] make hypercall preemption checks consistent



On 04/03/14 13:06, Jan Beulich wrote:
>>>> On 04.03.14 at 13:10, David Vrabel <david.vrabel@xxxxxxxxxx> wrote:
>> On 04/03/14 12:00, Jan Beulich wrote:
>>>>>> On 04.03.14 at 12:52, Andrew Cooper <andrew.cooper3@xxxxxxxxxx> wrote:
>>>> On 04/03/14 11:21, Jan Beulich wrote:
>>>>> - never preempt on the first iteration (ensure forward progress)
>>>>> - never preempt on the last iteration (pointless/wasteful)
>>>>> - do cheap checks first
>>>>>
>>>>> 1: common: make hypercall preemption checks consistent
>>>>> 2: x86: make hypercall preemption checks consistent
>>>>>
>>>>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
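
For illustration, those three rules map onto a loop of roughly the shape
below.  This is only a sketch, not Xen code: do_example_op(),
process_one_entry() and the way the restart point is reported back are
made up, and hypercall_preempt_check() merely stands in for the real check.

    #include <stdbool.h>

    static bool hypercall_preempt_check(void);      /* stand-in for the real check */
    static void process_one_entry(unsigned int i);  /* hypothetical per-entry work */

    static long do_example_op(unsigned int start, unsigned int nr_entries)
    {
        unsigned int i;

        for ( i = start; i < nr_entries; i++ )
        {
            /*
             * Cheap tests first: skip preemption on the first iteration
             * (guarantees forward progress) and on the last one (nothing
             * left to defer), and only then pay for the real check.
             */
            if ( i != start && i + 1 < nr_entries &&
                 hypercall_preempt_check() )
                return i;  /* caller would create a continuation at entry i */

            process_one_entry(i);
        }

        return 0;
    }
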
>>>>
>>>> All in all, this is a good improvement over what is currently present.
>>>>
>>>> However, given the overhead of creating continuations (particularly for
>>>> 32-bit HVM guests, which have been seen to unconditionally fail the
>>>> preemption check by the time the compat layer has run), some of these
>>>> operations would probably be better off guaranteeing more than a single
>>>> operation before allowing preemption.
>>>
>>> I agree, but I wanted to do one step at a time. Judging how much
>>> work we want to permit between preemption points will either be
>>> heavy guesswork or require quite a bit of performance
>>> measurement...
>>
>> Perhaps something time-based?  Record the time at start and make
>> hypercall_preempt_check() return true if more than T time has elapsed?
> 
> That's certainly an interesting idea. But it doesn't remove the need
> to determine the actual value of the parameter to use (T in this case).

Sure, but it should be just one parameter for all hypercalls instead of
having to consider each one separately.
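
In code terms the idea would be something like the sketch below (just to
make the suggestion concrete, not a patch): record a deadline when the
hypercall starts and have the check compare against it.  The names
hypercall_start(), hypercall_preempt_check_timed(), now_ns() and the 1ms
budget are all illustrative; in Xen the time source would presumably be
NOW()/MILLISECS(), and the existing checks would stay in place.

    #include <stdbool.h>
    #include <stdint.h>

    #define PREEMPT_BUDGET_NS  1000000ULL    /* T: illustrative 1ms budget */

    static uint64_t now_ns(void);            /* stand-in for the time source */

    static uint64_t hypercall_deadline;

    static void hypercall_start(void)
    {
        /* Called on hypercall entry: remember when the budget expires. */
        hypercall_deadline = now_ns() + PREEMPT_BUDGET_NS;
    }

    static bool hypercall_preempt_check_timed(void)
    {
        /* True once more than T has elapsed since hypercall_start(). */
        return now_ns() > hypercall_deadline;
    }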

David

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
