
Re: [Xen-devel] Wait Queues

On 08/11/2012 15:39, "Andres Lagar-Cavilla" <andreslc@xxxxxxxxxxxxxx> wrote:

>> dom0 vcpu?.
> Uhmm. But it seems there is _some_ method to the madness. Luckily mm locks are
> all taken after the p2m lock (and enforced that way). dom0 can grab ... the
> big domain lock? the grant table lock?
> Perhaps we can categorize locks between reflexive or foreign (not that we have
> abundant space in the spin lock struct to stash more flags) and perform some
> sort of enforcement like what goes on in the mm layer. Xen insults via
> BUG_ON's are a strong conditioning tool for developers. It is certainly
> simpler to tease out the locks that might deadlock dom0 than all possible
> locks, including RCU read-locks.
> What I mean:
> BUG_ON(current->domain != d && lock_is_reflexive)
> An example of a reflexive lock is the per page sharing lock.
> BUG_ON(prepare_to_wait && current->domain->holds_foreign_lock)
> An example of a foreign lock is the grant table lock.
> A third category would entail global locks, like the domain list lock, which are
> identical to foreign locks with respect to this analysis.
> Another benefit of this is that only reflexive locks need to be made
> sleep-capable, not everything under the sun. I.e. the possibility of livelock
> is corralled to apply only to vcpus of the same domain, and then it's avoided
> by making those lock holders re-schedulable.

This sounds possible. RCU read locks will often count as global locks by the
way, as they are most often used as an alternative to taking a global
spinlock or multi-reader lock. So sleeping in RCU critical regions is
generally not going to be a good idea. Perhaps it will turn out that such
regions don't get in your way too often.

> Andres
