Re: [Xen-devel] [PATCH] evtchn/fifo: don't corrupt queues if an old tail is linked
>>> On 09.12.13 at 15:43, David Vrabel <david.vrabel@xxxxxxxxxx> wrote:
> On 09/12/13 13:10, Jan Beulich wrote:
>>>>> On 09.12.13 at 13:56, David Vrabel <david.vrabel@xxxxxxxxxx> wrote:
>>> Do you think they should be initialized when an event is (re)bound?
>>> Because this would be broken as an unbound event might be an old tail.
>>
>> But if you don't do this, then you _require_ a set-priority operation,
>> yet that one's necessarily non-atomic with the bind. Newly created
>> event channels should start out at the default priority irrespective
>> of what the underlying tracking structure in the hypervisor was
>> used for before.
>
> Xen can only move an event between queues if that event isn't on a
> queue. It is also not notified when an event is removed from a queue.
>
> The guest can ensure a predictable state by only unbinding events that
> are not currently on a queue, e.g.:
>
>     /* Prevent it from becoming LINKED. */
>     set_bit(word, MASKED);
>
>     /* Wait for interrupt handlers to drain the event from its queue. */
>     while (test_bit(word, LINKED))
>         ;
>
>     /* Unlinked and masked, safe to unbind. If this port is bound again
>        it will become pending on the correct new queue. */
>     unbind();
>
> There doesn't need to be anything added to Xen to support this.
>
> The guest may need to defer the wait and unbind to a work queue or
> similar.

I still don't see how the event, after having been de-allocated and
re-bound, would end up on the default priority queue. And unlike an
active event channel, where the guest has to be prepared for it to fire
once at the old priority when its priority is altered, a freshly bound
event channel shouldn't fire at, e.g., the lowest or highest priority,
as that _may_ confuse the guest.

Jan
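For concreteness, here is a minimal C sketch of the drain-before-unbind
sequence David describes, written against the FIFO ABI's event-word
layout (the PENDING/MASKED/LINKED/BUSY flags occupy the top four bits of
each 32-bit event word). The helpers evtchn_fifo_word_from_port() and
evtchn_do_unbind() are hypothetical stand-ins for whatever the guest's
event-channel layer actually provides; only the bit positions are taken
from the public FIFO ABI.

    /*
     * Minimal sketch, assuming the public FIFO ABI bit layout.  The two
     * extern helpers below are hypothetical placeholders, not real Xen
     * or Linux interfaces.
     */
    #include <stdint.h>

    #define EVTCHN_FIFO_PENDING 31
    #define EVTCHN_FIFO_MASKED  30
    #define EVTCHN_FIFO_LINKED  29
    #define EVTCHN_FIFO_BUSY    28

    typedef uint32_t event_word_t;

    /* Hypothetical: return the event word for a port in the shared array. */
    extern event_word_t *evtchn_fifo_word_from_port(unsigned int port);
    /* Hypothetical: issue the actual unbind/close hypercall for the port. */
    extern void evtchn_do_unbind(unsigned int port);

    static void drain_and_unbind(unsigned int port)
    {
        event_word_t *word = evtchn_fifo_word_from_port(port);

        /* Mask first so Xen can no longer link this event onto a queue. */
        __atomic_fetch_or(word, 1u << EVTCHN_FIFO_MASKED, __ATOMIC_SEQ_CST);

        /*
         * Wait until the event has been consumed off whatever queue it
         * may already be on; Xen clears LINKED when it unlinks the event.
         * A real guest would defer this wait to a work queue rather than
         * busy-wait, as noted above.
         */
        while (__atomic_load_n(word, __ATOMIC_ACQUIRE) &
               (1u << EVTCHN_FIFO_LINKED))
            ;

        /* Unlinked and masked: now safe to unbind the port. */
        evtchn_do_unbind(port);
    }

Note this only guarantees the word is off its old queue; per Jan's
objection above, it says nothing about which priority queue the port
will end up on when it is re-bound.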