
[Xen-devel] [PATCH v3 7/8] golang/xenlight: Notify xenlight of SIGCHLD



libxl forks external processes and waits for them to complete; it
therefore needs to be notified when children exit.

In the absence of instructions to the contrary, libxl sets up its own
SIGCHLD handlers.

Golang always unmasks and handles SIGCHLD itself.  libxl thankfully
notices this and throws an assert() rather than clobbering SIGCHLD
handlers.

Tell libxl that we'll be responsible for getting SIGCHLD notifications
to it.  Arrange for a channel in the context to receive notifications
on SIGCHLD, and set up a goroutine that will pass these on to libxl.
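
For illustration, the shape of that plumbing in plain Go is roughly as
follows.  This is only a sketch built on os/signal; watchSigchld and
notify are made-up names standing in for the Context fields and the
libxl_childproc_sigchld_occurred() call in the diff below.

    package main

    import (
        "os"
        "os/signal"
        "syscall"
    )

    // watchSigchld forwards every SIGCHLD to notify() until stop() is
    // called; stop() waits for the forwarding goroutine to exit.
    func watchSigchld(notify func()) (stop func()) {
        sigchld := make(chan os.Signal, 2)
        done := make(chan struct{})
        signal.Notify(sigchld, syscall.SIGCHLD)
        go func() {
            for range sigchld {
                notify()
            }
            close(done)
        }()
        return func() {
            signal.Stop(sigchld)
            close(sigchld)
            <-done
        }
    }

    func main() {
        stop := watchSigchld(func() {
            // In the real patch: C.libxl_childproc_sigchld_occurred(ctx.ctx)
        })
        defer stop()
        // ... rest of the program ...
    }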

NB that every libxl context needs a notification; so each context
spins up its own goroutine when it is opened, and shuts it down again
on close.

libxl also wants to hold on to a const pointer to the
libxl_childproc_hooks structure it is given, rather than taking a
copy; so make a single global structure in C space and initialize it
once, in the package's init().
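
Sketched together (this just restates the hunks below in one place so
the whole pattern is visible at once; nothing here goes beyond the
patch itself):

    package xenlight

    /*
    #cgo LDFLAGS: -lxenlight
    #include <libxl.h>

    libxl_childproc_hooks xenlight_childproc_hooks;
    */
    import "C"

    func init() {
        // Tell libxl that the application's main loop owns SIGCHLD.
        C.xenlight_childproc_hooks.chldowner = C.libxl_sigchld_owner_mainloop
    }

    // Then, each time a context is opened (after libxl_ctx_alloc()):
    //   C.libxl_childproc_setmode(ctx.ctx, &C.xenlight_childproc_hooks, nil)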

While here, add a few comments to make the context set-up a bit easier
to follow.

Signed-off-by: George Dunlap <george.dunlap@xxxxxxxxxx>
---
v2:
- Fix unsafe libxl_childproc_hooks pointer behavior
- Close down the SIGCHLD handler first, and make sure it's exited
  before closing the context
- Explicitly decide to have a separate goroutine per ctx

NB that due to a bug in libxl, this will hang without Ian's "[PATCH v2
00/10] libxl: event: Fix hang for some applications" series.

CC: Nick Rosbrook <rosbrookn@xxxxxxxxxxxx>
CC: Ian Jackson <ian.jackson@xxxxxxxxxx>
---
 tools/golang/xenlight/xenlight.go | 72 ++++++++++++++++++++++++++++++-
 1 file changed, 70 insertions(+), 2 deletions(-)

diff --git a/tools/golang/xenlight/xenlight.go b/tools/golang/xenlight/xenlight.go
index 662b266250..c462e4bb42 100644
--- a/tools/golang/xenlight/xenlight.go
+++ b/tools/golang/xenlight/xenlight.go
@@ -20,6 +20,8 @@ package xenlight
 #cgo LDFLAGS: -lxenlight -lyajl -lxentoollog
 #include <stdlib.h>
 #include <libxl.h>
+
+libxl_childproc_hooks xenlight_childproc_hooks;
 */
 import "C"
 
@@ -33,6 +35,9 @@ import "C"
 
 import (
        "fmt"
+       "os"
+       "os/signal"
+       "syscall"
        "unsafe"
 )
 
@@ -72,10 +77,49 @@ func (e Error) Error() string {
        return fmt.Sprintf("libxl error: %d", e)
 }
 
+func init() {
+       // libxl for some reason wants to:
+       // 1. Retain a copy of this pointer as long as the context is open, and
+       // 2. Not free it when it's done
+       //
+       // Rather than alloc and free multiple copies, just keep a single
+       // static copy in the C space (since C code isn't allowed to retain pointers
+       // to go code), and initialize it once.
+       C.xenlight_childproc_hooks.chldowner = C.libxl_sigchld_owner_mainloop
+}
+
 // Context represents a libxl_ctx.
 type Context struct {
-       ctx    *C.libxl_ctx
-       logger *C.xentoollog_logger_stdiostream
+       ctx         *C.libxl_ctx
+       logger      *C.xentoollog_logger_stdiostream
+       sigchld     chan os.Signal
+       sigchldDone chan bool
+}
+
+// Golang always unmasks SIGCHLD, and internally has ways of
+// distributing SIGCHLD to multiple recipients.  libxl has provision
+// for this model: just tell it when a SIGCHLD arrives, and it will
+// look after its own processes.
+//
+// This should "play nicely" with other users of SIGCHLD as long as
+// they don't reap libxl's processes.
+//
+// Every context needs to be notified on each SIGCHLD; so spin up a
+// new goroutine for each context.  If there are a large number of contexts,
+// this means each context will be woken up to look through its own list of children.
+//
+// The alternative would be to register a fork callback, so that the
+// xenlight package could keep a single list of all children and only
+// notify the specific libxl context(s) whose children have exited.
+// But it's not clear that waking every context is much more work than
+// having the xenlight package scan one central list; and separate
+// goroutines at least have the potential to run in parallel.  Leave
+// that as an optimization for later if it turns out to be a bottleneck.
+func sigchldHandler(ctx *Context) {
+       for range ctx.sigchld {
+               go C.libxl_childproc_sigchld_occurred(ctx.ctx)
+       }
+       close(ctx.sigchldDone)
 }
 
 // NewContext returns a new Context.
@@ -89,19 +133,43 @@ func NewContext() (ctx *Context, err error) {
                }
        }()
 
+       // Create a logger
        ctx.logger = C.xtl_createlogger_stdiostream(C.stderr, C.XTL_DEBUG, 0)
 
+       // Allocate a context
        ret := C.libxl_ctx_alloc(&ctx.ctx, C.LIBXL_VERSION, 0,
                (*C.xentoollog_logger)(unsafe.Pointer(ctx.logger)))
        if ret != 0 {
                return ctx, Error(ret)
        }
 
+       // Tell libxl that we'll be dealing with SIGCHLD...
+       C.libxl_childproc_setmode(ctx.ctx, &C.xenlight_childproc_hooks, nil)
+
+       // ...and arrange to keep that promise.
+       ctx.sigchld = make(chan os.Signal, 2)
+       ctx.sigchldDone = make(chan bool, 1)
+       signal.Notify(ctx.sigchld, syscall.SIGCHLD)
+
+       go sigchldHandler(ctx)
+
        return ctx, nil
 }
 
 // Close closes the Context.
 func (ctx *Context) Close() error {
+       // Tell our SIGCHLD notifier to shut down, and wait for it to exit
+       // before we free the context.
+       if ctx.sigchld != nil {
+               signal.Stop(ctx.sigchld)
+               close(ctx.sigchld)
+
+               <-ctx.sigchldDone
+
+               ctx.sigchld = nil
+               ctx.sigchldDone = nil
+       }
+
        if ctx.ctx != nil {
                ret := C.libxl_ctx_free(ctx.ctx)
                if ret != 0 {
-- 
2.24.1

