
Re: [Xen-devel] [PATCH 6/7] libxl: fork: Provide ..._always_selective_reap



Jim Fehlig writes ("Re: [PATCH 6/7] libxl: fork: Provide 
..._always_selective_reap"):
> Ian Jackson wrote:
> > +    /* libxl owns SIGCHLD all the time, but it must only reap its own
> > +     * children.  The application will reap its own children
> > +     * synchronously with waitpid, without the assistance of SIGCHLD. */
> > +    libxl_sigchld_owner_libxl_always_selective_reap,
> 
> Should there be some documentation in the opening comments of
> "Subprocess handling"?  E.g. an entry under "For programs which run
> their own children alongside libxl's:"?

Yes.

> BTW, it is not clear to me how to use libxl_childproc_setmode() wrt
> different libxl_ctx.  Currently in the libvirt libxl driver there's a
> driver-wide ctx for more host-centric operations like
> libxl_get_version_info(), libxl_get_free_memory(), etc., and a
> per-domain ctx for domain-specific operations.  The current doc for
> libxl_childproc_setmode() says:

Oh dear.  Can you change it to use the same ctx everywhere?

If not, I'm afraid, someone needs to
 - keep a record of all the libxl ctxs
 - when the self-pipe triggers, iterate over all the ctxs calling
   libxl_childproc_sigchld_occurred

If that "someone" is libvirt, you'll have to do the self-pipe trick.

It would probably be easier to have libxl maintain a global list of
libxl ctxs for this purpose.

> When calling setmode() on the driver-wide or on each domain-specific
> ctx, I get an assert with this hunk
> 
> libvirtd: libxl_fork.c:241: libxl__sigchld_installhandler: Assertion
> `!sigchld_owner' failed.

The problem is that the SIGCHLD ownership is attached to the
individual ctx.  This could in principle be changed, I think.

I will try to see if I can do that.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

