
Re: [Xen-devel] [PATCH for-4.10] libs/evtchn: Remove active handler on clean-up or failure



On 11/14/2017 11:51 AM, Ian Jackson wrote:
> Ross Lagerwall writes ("Re: [PATCH for-4.10] libs/evtchn: Remove active handler on
> clean-up or failure"):
>> On 11/10/2017 05:10 PM, Julien Grall wrote:
>>> Commit 89d55473ed16543044a31d1e0d4660cf5a3f49df "xentoolcore_restrict_all:
>>> Implement for libxenevtchn" added a call to register a handler that
>>> allows the event channel to be restricted.
>>>
>>> However, the call to deregister the handler was not performed if open
>>> failed or when closing the event channel. This will result in corrupting
>>> the list of handlers and potentially crash the application later on.
>
> Sorry for not spotting this during review.
> The fix is correct as far as it goes, so:
>
> Acked-by: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
>
>>> The call to xentoolcore__deregister_active_handle is done at the same
>>> place as for the grants. But I am not convinced this is thread safe, as
>>> there is a potential race between closing the event channel and
>>> restricting the handler. Do we care about that?
>> ...
>> However, I think it should call xentoolcore__deregister_active_handle()
>> _before_ calling osdep_evtchn_close() to avoid trying to restrict a
>> closed fd or some other fd that happens to have the same number.
>
> You are right.  But this slightly weakens the guarantee provided by
> xentoolcore_restrict_all.
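
To make the ordering concrete, here is a minimal sketch of a close path that
deregisters before the osdep close, as suggested above. It is an illustration
of the ordering only, not the actual libxenevtchn source; it assumes the
library-internal declarations (xenevtchn_handle, the tc_ah field,
osdep_evtchn_close(), xtl_logger_destroy() and the
xentoolcore__*_active_handle() helpers) that would normally come from the
library's private header:

/* Sketch only, under the assumptions above; not the real code. */
#include "private.h"

int xenevtchn_close(xenevtchn_handle *xce)
{
    int rc;

    if ( !xce )
        return 0;

    /* Remove the handle from xentoolcore's active-handle list first,
     * so a concurrent xentoolcore_restrict_all() cannot act on a
     * closed fd, or on an unrelated fd that reuses the same number... */
    xentoolcore__deregister_active_handle(&xce->tc_ah);

    /* ...and only then close the underlying event channel fd. */
    rc = osdep_evtchn_close(xce);

    xtl_logger_destroy(xce->logger_tofree);
    free(xce);
    return rc;
}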


Now that I look at it, a similar scenario can happen during open. Since the handle is registered before it is actually opened, a concurrent xentoolcore_restrict_all() will try to restrict a handle that is not properly set up.

I think it is OK if xentoolcore_restrict_all() works with any open handle, where a handle is defined as open if it has _completed_ the call to e.g. xenevtchn_open() and xenevtchn_close() has not yet been called on it.
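
As a concrete illustration of that definition, here is a sketch of an open
path under the same assumptions as above (osdep_evtchn_open() is assumed by
analogy with osdep_evtchn_close(), and evtchn_restrict_cb is a made-up
callback name). The window between registering the handle and
osdep_evtchn_open() returning is exactly where xentoolcore_restrict_all()
can see a handle that is not yet "open" in the proposed sense:

/* Sketch only, under the assumptions above; not the real code. */
#include "private.h"

xenevtchn_handle *xenevtchn_open(xentoollog_logger *logger, unsigned open_flags)
{
    xenevtchn_handle *xce = malloc(sizeof(*xce));
    int rc;

    if ( !xce )
        return NULL;

    xce->fd = -1;
    xce->logger = logger;
    xce->logger_tofree = NULL;

    /* Registered before the osdep open: from here until
     * osdep_evtchn_open() returns, a concurrent
     * xentoolcore_restrict_all() can see this half-initialised handle. */
    xce->tc_ah.restrict_callback = evtchn_restrict_cb;   /* hypothetical */
    xentoolcore__register_active_handle(&xce->tc_ah);

    if ( !xce->logger )
    {
        xce->logger = xce->logger_tofree =
            (xentoollog_logger *)xtl_createlogger_stdiostream(stderr, XTL_PROGRESS, 0);
        if ( !xce->logger )
            goto err;
    }

    rc = osdep_evtchn_open(xce);
    if ( rc )
        goto err;

    /* Only at this point is the handle "open" as defined above. */
    return xce;

 err:
    /* The fix being acked: undo the registration on the failure path so
     * the xentoolcore handler list is not left referencing freed memory. */
    xentoolcore__deregister_active_handle(&xce->tc_ah);
    xtl_logger_destroy(xce->logger_tofree);
    free(xce);
    return NULL;
}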

--
Ross Lagerwall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
