Re: [PATCH v7 08/10] tools/misc: Add xen-vmtrace tool
On 22/01/2021 15:33, Ian Jackson wrote:
> Andrew Cooper writes ("[PATCH v7 08/10] tools/misc: Add xen-vmtrace tool"):
>> From: Michał Leszczyński <michal.leszczynski@xxxxxxx>
> ...
>> +    if ( signal(SIGINT, int_handler) == SIG_ERR )
>> +        err(1, "Failed to register signal handler\n");
>
> How bad is it if this signal handler is not effective ?

I believe far less so now that I've fixed up everything to use a (fixed)
XENMEM_acquire_resource, so Xen doesn't crash if this process dies in the
wrong order WRT the domain shutting down.  But I would have to defer to
Michał on that.

>> +    if ( xc_vmtrace_disable(xch, domid, vcpu) )
>> +        perror("xc_vmtrace_disable()");
>
> I guess the tracing will remain on, pointlessly, which has a perf
> impact but nothing else ?

The perf hit is substantial, but it is safe to leave enabled.

> How is it possible for the user to clean this up ?

For now, enable/disable can only fail with -EINVAL for calls made in the
wrong context, so a failure here is benign in practice.

I specifically didn't opt for reference counting the enable/disable calls,
because there cannot (usefully) be two users of this interface.

> Also: at the very least, you need to trap SIGTERM SIGHUP SIGPIPE.
>
> It would be good to exit with the right signal by re-raising it.

This is example code, not a production utility.  Anything more
production-worthy using this needs to account for the fact that Intel
Processor Trace can't pause on a full buffer.  (It ought to be able to on
forthcoming hardware, but this facility isn't available yet.)

The use cases thus far are always "small delta of execution between
introspection events", using a massive buffer as the mitigation for
hardware wrapping.  No amount of additional code here can prevent
stream-corruption problems when the buffer wraps.

As a result, it is kept as simple as possible as a demonstration of how to
use the API.

~Andrew