
Re: [Xen-devel] [PATCH 0 of 2] x86/mm: Unsharing ENOMEM handling


  • To: "Tim Deegan" <tim@xxxxxxx>
  • From: "Andres Lagar-Cavilla" <andres@xxxxxxxxxxxxxxxx>
  • Date: Thu, 15 Mar 2012 09:44:04 -0700
  • Cc: andres@xxxxxxxxxxxxxx, xen-devel@xxxxxxxxxxxxx, adin@xxxxxxxxxxxxxx
  • Delivery-date: Thu, 15 Mar 2012 16:44:33 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xen.org>

> At 07:35 -0700 on 15 Mar (1331796917), Andres Lagar-Cavilla wrote:
>> > At 11:29 -0400 on 12 Mar (1331551776), Andres Lagar-Cavilla wrote:
>> >> These two patches were originally posted on Feb 15th as part of a larger
>> >> series.
>> >>
>> >> They were left to simmer as a discussion on wait queues took precedence.
>> >>
>> >> Regardless of the ultimate fate of wait queues, these two patches are
>> >> necessary as they solve some bugs on the memory sharing side. When
>> >> unsharing fails, domains would spin forever, hosts would crash, etc.
>> >>
>> >> The patches also clarify the semantics of unsharing, and comment how
>> >> it's handled.
>> >>
>> >> Two comments against the Feb 15th series are taken care of here:
>> >>  - We assert that the unsharing code can only return success or ENOMEM.
>> >>  - Acked-by: Tim Deegan added to patch #1
>> >
>> > Applied, thanks.
>> >
>> > I'm a bit uneasy about the way this increases the amount of boilerplate
>> > and p2m-related knowledge that's needed at call sites, but it fixes real
>> > problems and I can't see an easy way to avoid it.
>> >
>> Agreed, completely. Luckily it's all internal to the hypervisor.
>>
>> I'm gonna float an idea right now, risking egg-in-the-face again. Our main
>> issue is that going to sleep on a wait queue is disallowed in an atomic
>> context, and for good reason: the vcpu would go to sleep holding locks.
>> Therefore, we can't magically hide all the complexity behind get_gfn, and
>> callers need to know things they shouldn't.
>>
>> However, sleeping only deadlocks if the "waker upper" would need to grab
>> any of those locks.
>
> Tempting.  But I don't think it will fly -- in general dom0 tools should
> be able to crash and restart without locking up Xen.   And anything that
> causes a VCPU to sleep forever with a lock held is likely to do that.

If a mem event tool crashes, the domain is rather hosed. A restarted helper
would have a hard time figuring out which events were pulled from the ring
but not yet acted upon.
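
To make that concrete, here is a toy consumer loop (hypothetical names, not
the actual mem_event/tools API): once the consumer index moves past a
request, the ring itself keeps no record of it, so a restarted helper sees an
empty ring while the faulting vcpu stays paused.

#include <stdio.h>

/*
 * Toy sketch only -- hypothetical names, not the real mem_event/tools API.
 * Once the consumer index advances past a request, the ring no longer
 * records it, so a restarted helper cannot tell which requests were
 * pulled but never answered.
 */
struct demo_ring {
    unsigned int req_cons;      /* advanced by the helper (consumer) */
    unsigned int req_prod;      /* advanced by the hypervisor (producer) */
    unsigned long gfns[8];      /* pretend payload: gfns needing service */
};

static void consume_one(struct demo_ring *r)
{
    /* The request leaves the ring here. */
    unsigned long gfn = r->gfns[r->req_cons++ & 7];

    /* A crash between this point and posting the response loses 'gfn':
     * the vcpu stays paused, and a restarted helper finds the ring empty. */
    printf("servicing gfn %#lx\n", gfn);
}

int main(void)
{
    struct demo_ring r = { .req_prod = 2, .gfns = { 0x1000, 0x2000 } };

    while (r.req_cons != r.req_prod)
        consume_one(&r);
    return 0;
}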

So, assume some toolstack element is notified of the helper crash. The only
way to go is to crash the domain. As long as the sleepers are not holding
global locks, we should be good. A sleeper holding a global lock is a
definitive no-no. We can strengthen this from a rule to actual runtime
enforcement (if we go down this crazy path). There is still the issue of
the domain cleanup process being blocked by the asleep lock holders.
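
Roughly what I mean by runtime enforcement, as a sketch with made-up names (a
real version would hang off Xen's lock wrappers and wait-queue code rather
than these stand-ins): global locks go through a wrapper that bumps a per-cpu
depth counter, and the wait-queue sleep path asserts that the counter is zero.

#include <assert.h>

/*
 * Sketch of the runtime-enforcement idea -- hypothetical names, not
 * existing Xen code.  "Global" locks are taken through a wrapper that
 * bumps a per-cpu depth counter; the wait-queue sleep path asserts the
 * counter is zero, so a vcpu trying to sleep with a global lock held
 * dies loudly in debug builds instead of deadlocking later.
 */
static __thread unsigned int global_lock_depth;  /* per-cpu in a hypervisor */

struct demo_lock { volatile int taken; };

static void global_lock(struct demo_lock *l)
{
    while (__sync_lock_test_and_set(&l->taken, 1))
        ;                                        /* spin */
    global_lock_depth++;
}

static void global_unlock(struct demo_lock *l)
{
    global_lock_depth--;
    __sync_lock_release(&l->taken);
}

static void wait_queue_sleep(void)
{
    /* Enforcement point: refuse to park a vcpu that still holds a global
     * lock, since the waker-upper (or domain cleanup) may need it. */
    assert(global_lock_depth == 0);

    /* ... queue the current vcpu and yield ... */
}

int main(void)
{
    struct demo_lock l = { 0 };

    global_lock(&l);
    global_unlock(&l);
    wait_queue_sleep();                          /* fine: nothing held */
    return 0;
}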

>
> Also we have to worry about anything that has to happen before the
> waker-upper gets to run -- for example, on a single-CPU Xen, any attempt
> by any code to get the lock that's held by the sleeper will hang forever
> because the waker-upper can't be scheduled.
>
> We could have some sort of time-out-and-crash-the-domain safety net, I
> guess, but part of the reason for wanting wait queues was avoiding
> plumbing all those error paths.
>
> Maybe we could just extend the idea and have the slow path of the
> spinlock code dump the caller on a wait queue in the hope that someone
> else will sort it out. :)

What about the second (or first nested) spinlock?
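
To spell out the objection with a made-up example (not real Xen code): as
soon as the contended lock is the inner one, the slow path parks the caller
on a wait queue while the outer lock is still held, which is exactly the
sleeping-with-a-lock-held situation we are trying to avoid.

#include <stdio.h>

/*
 * Made-up illustration, not real Xen code: if spin_lock()'s slow path
 * parks the caller on a wait queue, the first *nested* acquisition still
 * goes to sleep with the outer lock held -- the original problem again.
 */
struct demo_lock { int taken; };

static void wait_queue_park(const char *name, int locks_held)
{
    /* Stand-in for "put this vcpu on a wait queue until the lock frees". */
    printf("parked on wait queue for %s with %d lock(s) held\n",
           name, locks_held);
}

static void demo_spin_lock(struct demo_lock *l, const char *name,
                           int locks_already_held)
{
    if (l->taken) {
        /* The proposed slow path: sleep instead of spinning. */
        wait_queue_park(name, locks_already_held);
        return;                   /* pretend we were woken holding the lock */
    }
    l->taken = 1;
}

int main(void)
{
    struct demo_lock p2m_lock = { 0 };
    struct demo_lock shr_lock = { 1 };           /* already held elsewhere */

    demo_spin_lock(&p2m_lock, "p2m_lock", 0);    /* outer: uncontended     */
    demo_spin_lock(&shr_lock, "shr_lock", 1);    /* inner: sleeps while    */
                                                 /* p2m_lock is still held */
    return 0;
}
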
Andres
>
> Cheers,
>
> Tim.
>



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

