Re: [Xen-devel] [PATCH] mem_event: use wait queue when ring is full

  • To: adin@xxxxxxxxxxxxxx
  • From: "Andres Lagar-Cavilla" <andres@xxxxxxxxxxxxxxxx>
  • Date: Thu, 12 Jan 2012 11:22:50 -0800
  • Cc: andres@xxxxxxxxxxxxxx, xen-devel@xxxxxxxxxxxxxxxxxxx, tim@xxxxxxx, olaf@xxxxxxxxx
  • Delivery-date: Thu, 12 Jan 2012 19:23:22 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

> I didn't spend a lot of time diagnosing exactly what was going wrong
> with the patch.  I did have some local patches applied (in the process
> of being submitted to the list) and some debug changes to ensure that
> the correct code paths were hit, so it's quite possible that it may
> have been my mistake. If so, I apologize. I didn't want to spend a lot
> of time debugging and I'd had a similar experience with waitqueues in
> the fall.
> As Andres pointed out, we spent time merging our local approach into
> your patch and testing that one. As a result of the combination, I
> also did a few interface changes to ensure that callers use the
> mem_event code correctly (i.e. calls to wake() are handled internally,
> rather than relying on callers), and dropped some of the complexity of
> accounting separately for foreign mappers.  With the waitqueue
> failsafe in place, I don't think that's necessary.  Anyways, I tried
> to preserve the spirit of your code, and would love to hear thoughts.
> We'll be doing more testing today to ensure that we've properly
> exercised all the different code paths (including wait queues).

It works.

1. Fire off 1GB HVM with PV drivers. Enable balloon
2. Fire off xenpaging
3. xenstore write memory/target-tot_pages 524288
(wait until everything is paged)
4. xenstore write memory/target 524288

No crashes, neither domain nor host nor xenpaging. Phew ;)
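The test sequence above can be sketched as a small script. This is only a hedged illustration: the exact xenpaging invocation, the paging-file path, and the xenstore paths vary by Xen version, and the domain id is a stand-in. `memory/target` is in KiB, so 524288 here means ballooning the guest to 512 MiB.

```shell
#!/bin/sh
# Sketch of the paging + ballooning stress test described above.
# All command lines and paths are assumptions for illustration;
# pass "" (or nothing) as the second argument to only print commands,
# or a real runner such as "eval" to actually execute them.
page_and_balloon() {
    domid="$1"
    run="${2:-echo}"    # default: dry-run, just print the commands

    # 2. start xenpaging against the already-running 1 GiB HVM guest
    $run xenpaging -d "$domid" -f /var/lib/xen/paging/domain.paging

    # 3. ask xenpaging to shrink the guest's resident footprint
    $run xenstore-write "/local/domain/$domid/memory/target-tot_pages" 524288
    # ... wait here until xenpaging reports everything paged out ...

    # 4. balloon the guest down (memory/target is in KiB: 524288 KiB = 512 MiB)
    $run xenstore-write "/local/domain/$domid/memory/target" 524288
}

# dry-run for a hypothetical domain id 1: prints the three commands
page_and_balloon 1
```

Exercising paging and ballooning together like this is what forces the mem_event ring to fill and the wait-queue path to be taken.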

Olaf, let us know if you have further concerns; AFAICT the patch is ready
for showtime.

> Cheers,
> -Adin
>>>> What we did is take this patch, amalgamate it with some bits from our
>>>> ring
>>>> management approach. We're ready to submit that, along with a
>>>> description
>>>> of how we test it. It works for us, and it involves wait queue's for
>>>> corner cases.
>>> Now if the patch you just sent out uses wait queues as well, and using
>>> wait queues causes sudden host reboots for reasons not yet known, how
>>> is your patch any better, other than that the reboots don't appear to
>>> happen anymore?

Xen-devel mailing list