
Re: [Xen-devel] Use of watch_pipe in xs_handle structure



On Thu, 2014-02-13 at 18:09 -0800, Chris Takemura wrote:
> Hi,
> 
> This message was also posted to the qemu-devel list, but I didn't get any
> reply, and it occurred to me that it might make more sense here.  Sorry if
> you're reading it twice.
> 
> Anyway, I'm trying to debug a problem that causes qemu-dm to lock up with
> Xen HVM domains.  We're using the qemu version that came with Xen 3.4.2.
> I know it's old, but we're stuck with it for a little while yet.

I'm afraid you are unlikely to get much specific interest in a 3.4.x
issue.

At the very least I would suggest an upgrade to 3.4.4.

Otherwise I'd suggest you at least skim the commit logs (at a minimum
for qemu-dm and libxenstore) for any interesting-looking related fixes
which you are missing.

> I think the hang is related to thread synchronization and the xenstore,
> but I'm not sure how it all fits together. In particular, I don't
> understand the lines in xs.c that handle the watch_pipe, e.g.:
> 
>         /* Kick users out of their select() loop. */
> 
>         if (list_empty(&h->watch_list) &&
>             (h->watch_pipe[1] != -1))
>             while (write(h->watch_pipe[1], body, 1) != 1)
>                 continue;
> 
> 
> It looks to me like the other thread blocks while reading from the pipe,
> and the write allows it to continue.  But this code seems like it does the
> same thing as the condvar_signal call that comes slightly after, and
> therefore it seems like I could safely #ifndef USE_PTHREAD it out.  Is
> this the case?

I don't think so. The cond var is for libxenstore's internal
synchronisation, while the pipe is there to allow the calling
application to include xenstore in its select or poll based event loop.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

