
Re: [Xen-devel] Need help with fixing the Xen waitqueue feature


  • To: Olaf Hering <olaf@xxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • From: Keir Fraser <keir.xen@xxxxxxxxx>
  • Date: Tue, 08 Nov 2011 22:05:41 +0000
  • Cc:
  • Delivery-date: Tue, 08 Nov 2011 14:09:10 -0800
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: AcyeYo3OrP4N8Ihww06cxS9uKo6mbg==
  • Thread-topic: [Xen-devel] Need help with fixing the Xen waitqueue feature

On 08/11/2011 21:20, "Olaf Hering" <olaf@xxxxxxxxx> wrote:

> Another thing is that sometimes the host suddenly reboots without any
> message. I think the reason for this is that a vcpu whose stack was put
> aside and that was later resumed may find itself on another physical
> cpu. And if that happens, wouldn't that invalidate some of the local
> variables back in the callchain? If some of them point to the old
> physical cpu, how could this be fixed? Perhaps a few "volatiles" are
> needed in some places.
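For concreteness, the pattern being described is roughly the following
(illustrative sketch only, not code from the patch; 'foo_count', 'foo_wq',
'some_condition' and 'demo' are made-up names, while DEFINE_PER_CPU,
this_cpu() and wait_event() are the existing interfaces):

    static DEFINE_PER_CPU(unsigned long, foo_count);
    static struct waitqueue_head foo_wq;   /* assume initialised elsewhere */

    static void demo(void)
    {
        /* Cache a pointer into this pCPU's per-cpu area on the stack. */
        unsigned long *cnt = &this_cpu(foo_count);

        /* Stack is put aside here; on wakeup the vcpu may be running
         * on a different physical cpu. */
        wait_event(foo_wq, some_condition());

        /* Stale: 'cnt' still points at the *old* pCPU's counter. */
        (*cnt)++;
    }

(A 'volatile' would not help with that particular case, since it is the
cached address itself that is stale, not the value it points to.)
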

From how many call sites can we end up on a wait queue? I know we were going
to end up with a small and explicit number (e.g., in __hvm_copy()) but does
this patch make it a more generally-used mechanism? There will unavoidably
be many constraints on callers who want to be able to yield the cpu. We can
add Linux-style get_cpu/put_cpu abstractions to catch some of them. Actually
I don't think it's *that* common that hypercall contexts cache things like
per-cpu pointers. But every caller will need auditing, I expect.
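
By Linux-style get_cpu/put_cpu I mean something along these lines (untested
sketch; the 'percpu_refs' counter on struct vcpu is made up for
illustration):

    /* Bump a per-vcpu count whenever a pCPU-local pointer is cached. */
    static inline unsigned int get_cpu(void)
    {
        current->percpu_refs++;
        return smp_processor_id();
    }

    static inline void put_cpu(void)
    {
        ASSERT(current->percpu_refs > 0);
        current->percpu_refs--;
    }

prepare_to_wait() could then ASSERT(current->percpu_refs == 0) and catch
anyone trying to sleep with per-cpu state still live on their stack.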

A sudden reboot is very extreme. No message even on a serial line? That most
commonly indicates bad page tables; with most other bugs you'd at least get a
double fault message.

 -- Keir


