
Re: [Xen-devel] [PATCH v2] viridian: fix the HvFlushVirtualAddress/List hypercall implementation



>>> On 14.02.19 at 13:38, <Paul.Durrant@xxxxxxxxxx> wrote:
>> From: Juergen Gross [mailto:jgross@xxxxxxxx]
>> Sent: 14 February 2019 12:35
>> 
>> On 14/02/2019 13:10, Paul Durrant wrote:
>> > v2:
>> >  - Use cpumask_scratch
>> 
>> That's not a good idea. cpumask_scratch may be used from other cpus as
>> long as the respective scheduler lock is being held. See the comment in
>> include/xen/sched-if.h:
>> 
>> /*
>>  * Scratch space, for avoiding having too many cpumask_t on the stack.
>>  * Within each scheduler, when using the scratch mask of one pCPU:
>>  * - the pCPU must belong to the scheduler,
>>  * - the caller must own the per-pCPU scheduler lock (a.k.a. runqueue
>>  *   lock).
>>  */
>> 
>> So please don't use cpumask_scratch outside the scheduler!
> 
> Ah, yes, it's because of cpumask_scratch_cpu()... I'd indeed missed that. In 
> which case a dedicated flush_cpumask is still required.

And I didn't recall this aspect either - I'm sorry for misleading you.
So Jürgen - thanks for spotting!
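
For the archives, the dedicated-mask approach being discussed would look
roughly like this (a sketch only: the viridian_vcpu field, the
arch.hvm.viridian path, and the helper below are illustrative assumptions,
not the actual follow-up patch):

    /*
     * A mask owned by exactly one vCPU needs no scheduler (runqueue)
     * lock, unlike cpumask_scratch_cpu(). Field name and placement
     * here are hypothetical.
     */
    struct viridian_vcpu {
        /* ... existing state ... */
        cpumask_var_t flush_cpumask;
    };

    int viridian_vcpu_init(struct vcpu *v)
    {
        if ( !alloc_cpumask_var(&v->arch.hvm.viridian.flush_cpumask) )
            return -ENOMEM;
        return 0;
    }

    void viridian_vcpu_deinit(struct vcpu *v)
    {
        free_cpumask_var(v->arch.hvm.viridian.flush_cpumask);
    }

    /*
     * Build the set of pCPUs to flush from the guest-supplied vCPU
     * mask (simplified: assumes vcpu_id < 64).
     */
    static void flush_build_pcpu_mask(struct vcpu *curr, uint64_t vcpu_mask)
    {
        cpumask_t *mask = curr->arch.hvm.viridian.flush_cpumask;
        struct vcpu *v;

        cpumask_clear(mask);
        for_each_vcpu ( curr->domain, v )
            if ( vcpu_mask & (1ull << v->vcpu_id) )
                __cpumask_set_cpu(v->processor, mask);
    }

Since the mask belongs to the calling vCPU for the lifetime of the
hypercall, it can be filled and consumed without taking any pCPU's
runqueue lock - which is precisely the constraint that rules out
cpumask_scratch here.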

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel