Re: [Xen-devel] [PATCH v3] xen: introduce VCPUOP_register_runstate_phys_memory_area hypercall
Hi Jan,

On 13/06/2019 13:41, Jan Beulich wrote:
> On 13.06.19 at 14:32, <andrii.anisov@xxxxxxxxx> wrote:
>> Jan, Julien,
>>
>> On 11.06.19 12:10, Jan Beulich wrote:
>>>>> At the very least such loops want a cpu_relax() in their bodies.
>>>>> But this being on a hypercall path - are there theoretical
>>>>> guarantees that a guest can't abuse this to lock up a CPU?
>>>>
>>>> Hmmm, I suggested this, but it looks like a guest may call the
>>>> hypercall multiple times from different vCPUs. So this could be a
>>>> way to delay work on the CPU.
>>>>
>>>> I wanted to make the context switch mostly lockless and therefore
>>>> avoid introducing a spinlock.
>>>
>>> Well, constructs like the above are trying to mimic a spinlock
>>> without actually using a spinlock. There are extremely rare
>>> situations in which this may indeed be warranted, but here it falls
>>> in the common "makes things worse overall" bucket, I think. To not
>>> unduly penalize the actual update paths, I think using a r/w lock
>>> would be appropriate here.
>>
>> So what is the conclusion here? Should we go with trylock and
>> hypercall_create_continuation() in order to avoid locking, but still
>> not fail to the guest?
>
> I'm not convinced a "trylock" approach is needed - that's something
> Julien suggested.

I think the trylock in the context switch is a must. Otherwise you
would delay the context switch if the information gets updated.

> I'm pretty sure we're acquiring other locks in hypercall context
> without going the trylock route. I am convinced though that the
> pseudo-lock you've used needs to be replaced by a real (and perhaps
> r/w) one, _if_ there is any need for locking in the first place.

You were the one asking for theoretical guarantees that a guest can't
abuse this to lock up a CPU. There is no way to guarantee that, as
multiple vCPUs could call the hypercall and take the same lock,
potentially delaying the work significantly.

Regarding the need for the lock, I still can't see how you can make it
safe without it, as you may have concurrent calls. Feel free to
suggest a way.
Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel