Re: [PATCH v3 5/5] evtchn: don't call Xen consumer callback with per-channel lock held
On Fri, Dec 4, 2020 at 6:29 AM Julien Grall <julien@xxxxxxx> wrote:
>
> Hi Jan,
>
> On 03/12/2020 10:09, Jan Beulich wrote:
> > On 02.12.2020 22:10, Julien Grall wrote:
> >> On 23/11/2020 13:30, Jan Beulich wrote:
> >>> While there don't look to be any problems with this right now, the lock
> >>> order implications from holding the lock can be very difficult to follow
> >>> (and may be easy to violate unknowingly). The present callbacks don't
> >>> (and no such callback should) have any need for the lock to be held.
> >>>
> >>> However, vm_event_disable() frees the structures used by respective
> >>> callbacks and isn't otherwise synchronized with invocations of these
> >>> callbacks, so maintain a count of in-progress calls, for evtchn_close()
> >>> to wait to drop to zero before freeing the port (and dropping the lock).
> >>
> >> AFAICT, this callback is not the only place where the synchronization is
> >> missing in the VM event code.
> >>
> >> For instance, vm_event_put_request() can also race against
> >> vm_event_disable().
> >>
> >> So shouldn't we handle this issue properly in VM event?
> >
> > I suppose that's a question to the VM event folks rather than me?
>
> Yes. From my understanding of Tamas's e-mail, they are relying on the
> monitoring software to do the right thing.
>
> I will refrain to comment on this approach. However, given the race is
> much wider than the event channel, I would recommend to not add more
> code in the event channel to deal with such problem.
>
> Instead, this should be fixed in the VM event code when someone has time
> to harden the subsystem.

I double-checked and the disable route is actually more robust, we don't
just rely on the toolstack doing the right thing. The domain gets paused
before any calls to vm_event_disable. So I don't think there is really a
race-condition here.

Tamas
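For readers following the archive without the patch itself, the sketch below models the "count of in-progress calls" idea from the quoted commit message: the notification path accounts for a callback in flight before invoking it with the per-channel lock dropped, and the close path waits for that count to reach zero before freeing the port. This is plain C11 with stdatomic, not the actual Xen code; every name is illustrative, and it is simplified (the real patch takes the count while still holding the per-channel lock, which is what closes the window against a concurrent close).

/* Illustrative model only -- hypothetical names, not Xen code. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct channel {
    atomic_int  active_calls;   /* callbacks currently in flight */
    atomic_bool closing;        /* set by the close path */
    void (*callback)(struct channel *);
};

/* Notification path: call the consumer callback without holding the
 * (elided) per-channel lock, but record that a call is in progress. */
static void notify(struct channel *ch)
{
    atomic_fetch_add(&ch->active_calls, 1);
    if (!atomic_load(&ch->closing))
        ch->callback(ch);                 /* lock not held here */
    atomic_fetch_sub(&ch->active_calls, 1);
}

/* Close path: wait for in-flight callbacks to drain before freeing,
 * mirroring what the commit message says evtchn_close() does. */
static void close_channel(struct channel *ch)
{
    atomic_store(&ch->closing, true);
    while (atomic_load(&ch->active_calls) != 0)
        ;                                 /* spin until the count drops to zero */
    /* now safe to free the port / callback state */
}

static void dummy_cb(struct channel *ch)
{
    (void)ch;
    puts("callback ran");
}

int main(void)
{
    struct channel ch = { .callback = dummy_cb };
    notify(&ch);                          /* single-threaded demonstration */
    close_channel(&ch);
    return 0;
}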