
Re: [Xen-devel] [RFC] Shutdown event script for xl toolstack



Joseph Glanville writes ("[RFC] Shutdown event script for xl toolstack"):
> Previously I had a series of out of tree patches to xend that added a
> shutdown event callback on domain death that would call a script with
> the reason why the domain shutdown.
> Basically analogous to this:
> /etc/xen/scripts/shutdown-event $event $domname $domid

I think this is a good idea, although it shouldn't be enabled by
default and I don't see the need to provide an example script.

> My general idea is to add a config option for a path to an event
> handler script:
> on_shutdown_event = "/etc/xen/scripts/shutdown-event"

Yes, this is good.  Shouldn't the argument be a string list, so that
you can specify all the arguments to exec()?  Will the script inherit
the domid or name in an environment variable?
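
For example, as a sketch only (the list syntax would follow the other
list-valued config options, and XL_DOMID / XL_DOMNAME are hypothetical
variable names, not a settled interface):

  on_shutdown_event = [ "/etc/xen/scripts/shutdown-event", "--verbose" ]

with xl exporting the domid and name as, say, XL_DOMID and XL_DOMNAME
in the script's environment rather than splicing them into the command
line.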

You need to document this fully, I think, in docs/man/xl.cfg.pod.5.

It would also be good for the script to be able to give instructions
to the daemonic xl, for example to cause the daemonic xl to neither
reboot nor preserve the domain.  So if you wanted your domain to get a
different config when it reboots, you would write a hook script which
destroys the previous domain and starts the new one by hand.
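
One way to express that, purely as an illustration and not a settled
interface, would be to let the hook script's exit status override the
configured action, reusing the enum xl already has for on_poweroff and
friends (the status value 42 here is made up):

  /* after reaping the script with waitpid(): */
  if (WIFEXITED(status) && WEXITSTATUS(status) == 42)
      /* script says it handled everything itself: just destroy,
         neither reboot nor preserve */
      action = LIBXL_ACTION_ON_SHUTDOWN_DESTROY;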

> Create the event during handle_domain_death in xl_cmdimpl.c and fire
> the script once the shutdown reason and action have been parsed.

When you say "create the event", what do you mean?

> I implemented a hacky version to illustrate my point but I would like
> some advice on how to do this properly and what tools/framework within
> libxl I could leverage to make it cleaner.

This is going in the right direction.  I'll comment in more detail
below.

> A quick overview of the following would help immensely:
> Where to add a new config option, and what steps it goes through to
> get to libxl_domain_config.

This hook script functionality should be part of xl, not part of
libxl, so arguably you shouldn't add it to libxl_domain_config.

Indeed I think it's arguably a mistake that the on_{poweroff,...}
items are in libxl_domain_config rather than in something provided by
xl itself.

The config parsing is done in parse_config_data.
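
For a new xl-only option, a minimal sketch of the parsing might look
like this (check the current xlu_cfg_get_string prototype in
libxlutil.h; shutdown_event_script is a hypothetical xl-private
variable):

  /* in parse_config_data() in tools/libxl/xl_cmdimpl.c */
  const char *buf;
  if (!xlu_cfg_get_string(config, "on_shutdown_event", &buf, 0))
      shutdown_event_script = strdup(buf);  /* deliberately not stored
                                               in libxl_domain_config */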

> How to exec an external script safely: can I use the usual fork/exec,
> or should I be using libxl__exec or libxl_spawn*?

In xl I think you should use fork/exec.  You may not use any
libxl__* functions, as they are private to libxl.

> Best place/way to get the event data; at the moment
> handle_domain_death looks like an easy target, but there might be
> more elegant ways...

handle_domain_death is called in only one place, the event loop in the
daemonic xl.  And yes, it seems like the right place to do this.

> diff --git a/tools/hotplug/Linux/shutdown-event b/tools/hotplug/Linux/shutdown-event

As I say, I don't think we need an example like this.

Also, do we think asking the user to handle this with a switch in
their event script is really better than providing four separate
config options for separate commands?  Doing the latter will make the
scripts simpler, which is good because they'll be very annoying to
debug.
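
That is, something like this in the config (names purely
illustrative):

  on_poweroff_script = "/etc/xen/scripts/on-poweroff"
  on_reboot_script   = "/etc/xen/scripts/on-reboot"
  on_crash_script    = "/etc/xen/scripts/on-crash"
  on_suspend_script  = "/etc/xen/scripts/on-suspend"

Each script then only has to deal with the one event it was registered
for.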

> +    char event_name[10];
> +    char event_cmd[100];
> 
>     switch (info->shutdown_reason) {
>     case SHUTDOWN_poweroff:
>         action = d_config->on_poweroff;
> +        strcpy(event_name,"poweroff");

Surely event_name should be
  const char *event_name;
and then you can do
  event_name = "poweroff";

But this goes away if you have four separate hook script config
options.

> +    snprintf(event_cmd, 100, SHUTDOWN_EVENT_SCRIPT, event_name,
> +             d_config->c_info.name, domid);
> +    ret = system(event_cmd);

This part needs serious work.  Firstly, we need to define an
interface.  And I don't really like system(); you should be using
fork/exec.
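
To be concrete, here is a minimal sketch of the fork/exec approach
(needs <unistd.h>, <stdlib.h>, <stdio.h> and <sys/wait.h>; the XL_*
environment variable names and script_path are assumptions, not a
settled interface):

  pid_t pid = fork();
  if (pid == -1) {
      perror("fork");
  } else if (pid == 0) {
      /* child: export the event details, then exec the hook script */
      char domid_buf[16];
      snprintf(domid_buf, sizeof(domid_buf), "%u", domid);
      setenv("XL_EVENT",   event_name, 1);
      setenv("XL_DOMID",   domid_buf, 1);
      setenv("XL_DOMNAME", d_config->c_info.name, 1);
      execl(script_path, script_path, (char *)NULL);
      _exit(127);  /* exec failed */
  } else {
      int status;
      /* blocking here is crude; the daemonic xl would rather reap the
         child from its event loop */
      waitpid(pid, &status, 0);
  }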

Thanks,
Ian.
