Re: [Xen-devel] [PATCH 28/31] libxl: provide libxl__openpty_*
On Wed, 2012-04-11 at 12:19 +0100, Ian Jackson wrote:
> Ian Campbell writes ("Re: [Xen-devel] [PATCH 28/31] libxl: provide
> libxl__openpty_*"):
> > On Tue, 2012-04-10 at 20:08 +0100, Ian Jackson wrote:
> > > General facility for ao operations to open ptys.
> ...
> > > + if (!pid) {
> > > + /* child */
> > > + close(sockets[0]);
> > > + signal(SIGCHLD, SIG_DFL);
> > > +
> > > + for (i=0; i<count; i++) {
> > > + r = openpty(&ptyfds[i][0], &ptyfds[i][1], NULL, termp, winp);
> > > + if (r) { LOGE(ERROR,"openpty failed"); _exit(-1); }
> > > + }
> > > + rc = libxl__sendmsg_fds(gc, sockets[1], "",1,
> > > + 2*count, &ptyfds[0][0], "ptys");
> > > + if (rc) { LOGE(ERROR,"sendmsg to parent failed"); _exit(-1); }
> >
> > Buried in this LOGE is a CTX somewhere, right? Is it valid to keep using
> > it? Perhaps it's OK just for this logging.
>
> This should be documented somewhere. But I see it isn't. The closest
> we have is:
>
> * The child should go on to exec (or exit) soon, and should not make
> * any further libxl event calls in the meantime.
>
> I have clarified this:
>
> * The child should go on to exec (or exit) soon. The child may not
> * make any further calls to libxl infrastructure, except for memory
> * allocation and logging. If the child needs to use xenstore it
> * must open its own xs handle and use it directly, rather than via
> * the libxl event machinery.
Sounds good. I think I had mentally omitted the "event" in the original
too.
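
For illustration of that rule: a child that needs xenstore after fork
opens its own connection through libxenstore and uses it directly,
never going through libxl. A minimal sketch; the node path and value
here are invented for the example:

    #include <stdbool.h>
    #include <string.h>
    #include <xenstore.h>   /* libxenstore; older trees ship this as <xs.h> */

    /* Sketch: runs in the child after fork().  Opens a private xs
     * handle rather than reusing anything owned by libxl. */
    static int child_touch_xenstore(void)
    {
        struct xs_handle *xsh = xs_open(0);   /* the child's own connection */
        if (!xsh) return -1;

        /* Hypothetical node and value, purely for illustration. */
        const char *path = "/local/domain/0/example";
        const char *val = "hello";
        bool ok = xs_write(xsh, XBT_NULL, path, val, strlen(val));

        xs_close(xsh);
        return ok ? 0 : -1;
    }
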
> > I suppose we don't need to call the postfork-noexecing handler because
> > we are inside libxl?
>
> No, we don't need to call it because we're fast ("exit ... soon",
> above). The worst case is that we hold onto unwanted fds while the
> openpty helper runs etc.
Oh, yes, of course.
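
For reference, the fd handoff in the quoted patch (libxl__sendmsg_fds)
rests on the standard SCM_RIGHTS mechanism for passing descriptors over
a Unix-domain socketpair. A minimal standalone sketch of the sending
side, not libxl's actual helper, assuming at most 16 fds and following
the patch's convention of sending a single data byte alongside the
descriptors:

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    /* Sketch: send nfds descriptors plus one data byte over a Unix
     * socket.  This is the generic POSIX SCM_RIGHTS pattern, not
     * libxl's implementation. */
    static int send_fds(int sock, const int *fds, int nfds)
    {
        char byte = 0;
        struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
        union {
            struct cmsghdr align;                 /* force alignment */
            char buf[CMSG_SPACE(16 * sizeof(int))]; /* room for 16 fds */
        } u;

        if (nfds < 1 || nfds > 16) return -1;

        struct msghdr msg = {
            .msg_iov = &iov,
            .msg_iovlen = 1,
            .msg_control = u.buf,
            .msg_controllen = CMSG_SPACE(nfds * sizeof(int)),
        };

        struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;
        cmsg->cmsg_len = CMSG_LEN(nfds * sizeof(int));
        memcpy(CMSG_DATA(cmsg), fds, nfds * sizeof(int));

        return sendmsg(sock, &msg, 0) == 1 ? 0 : -1;
    }

The parent side recvmsg()s the same single byte and unpacks the
descriptors from the ancillary data.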