Re: [Xen-devel] [PATCH 2/6] xl: Add commands for hiding and unhiding pcie passthrough devices
I haven't reviewed the code in detail, but I have some questions
regarding the design. See the end of this email.

On Tue, Jun 27, 2017 at 12:14:54PM -0500, Venu Busireddy wrote:
> 
> +static void pciassignable_list_hidden(void)
> +{
> +    libxl_device_pci *pcidevs;
> +    int num, i;
> +
> +    pcidevs = libxl_device_pci_assignable_list(ctx, &num);
> +
> +    if ( pcidevs == NULL )

Coding style -- the rest of this code uses "if (pcidevs == NULL)",
without the spaces inside the parentheses.

> +        return;
> +    for (i = 0; i < num; i++) {
> +        if (libxl_device_pci_assignable_is_hidden(ctx, &pcidevs[i]))
> +            printf("%04x:%02x:%02x.%01x\n",
> +                   pcidevs[i].domain, pcidevs[i].bus, pcidevs[i].dev,
> +                   pcidevs[i].func);
> +        libxl_device_pci_dispose(&pcidevs[i]);
> +    }
> +    free(pcidevs);
> +}
> +
> +int main_pciassignable_list_hidden(int argc, char **argv)
> +{
> +    int opt;
> +
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "pci-assignable-list-hidden", 0) {
> +        /* No options */
> +    }
> +
> +    pciassignable_list_hidden();
> +    return 0;
> +}
> +
> +static int pciassignable_hide(const char *bdf)
> +{
> +    libxl_device_pci pcidev;
> +    XLU_Config *config;
> +    int r = EXIT_SUCCESS;
> +
> +    libxl_device_pci_init(&pcidev);
> +
> +    config = xlu_cfg_init(stderr, "command line");
> +    if (!config) {
> +        perror("xlu_cfg_init");
> +        exit(-1);

If you don't want EXIT_FAILURE, please document these exit values
somewhere -- the manpage would be a good place.

> +    }
> +
> +    if (xlu_pci_parse_bdf(config, &pcidev, bdf)) {
> +        fprintf(stderr, "pci-assignable-hide: malformed BDF specification \"%s\"\n", bdf);
> +        exit(2);
> +    }
> +
> +    if (libxl_device_pci_assignable_hide(ctx, &pcidev))
> +        r = EXIT_FAILURE;
> +
> +    libxl_device_pci_dispose(&pcidev);
> +    xlu_cfg_destroy(config);
> +
> +    return r;
> +}
> +
> +int main_pciassignable_hide(int argc, char **argv)
> +{
> +    int opt;
> +    const char *bdf = NULL;
> +
> +    SWITCH_FOREACH_OPT(opt, "", NULL, "main_pciassignable_hide", 1) {
> +        /* No options */
> +    }
> +
> +    bdf = argv[optind];
> +
> +    if (pciassignable_hide(bdf))
> +        return EXIT_FAILURE;
> +
> +    return EXIT_SUCCESS;
> +}

[...]

> diff --git a/tools/xl/xl_vmcontrol.c b/tools/xl/xl_vmcontrol.c
> index 89c2b25..10a48a9 100644
> --- a/tools/xl/xl_vmcontrol.c
> +++ b/tools/xl/xl_vmcontrol.c
> @@ -966,6 +966,15 @@ start:
>      LOG("Waiting for domain %s (domid %u) to die [pid %ld]",
>          d_config.c_info.name, domid, (long)getpid());
> 
> +    ret = libxl_reg_aer_events_handler(ctx, domid);
> +    if (ret) {
> +        /*
> +         * This error may not be severe enough to fail the creation of the VM.
> +         * Log the error, and continue with the creation.
> +         */
> +        LOG("libxl_reg_aer_events_handler() failed, ret = 0x%08x", ret);
> +    }
> +

First, this suggests the ordering of the patch series is wrong -- you
need to put the patch that implements the new function before this one.

The other thing you need to be aware of is that if the user chooses not
to use a daemonised xl, he / she doesn't get a chance to handle these
events. This is potentially problematic for driver domains. You
probably want to also modify the xl devd command.

Also on the subject, what are your thoughts on driver domains? I'm not
sure a driver domain has the permission to kill the guest.
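To make the xl devd point above a bit more concrete: something along
these lines in main_devd might do, assuming the new
libxl_reg_aer_events_handler() from this series. This is only a rough,
untested sketch -- get_served_domids() is entirely made up, because how
a driver domain learns which domids it serves is exactly the open
question I'm raising:

    /* In main_devd, after libxl ctx setup and daemonisation, before
     * entering the backend event loop. */
    uint32_t *domids;
    int num_domids, i, rc;

    /* Hypothetical helper: enumerate the domains whose PCI devices
     * this driver domain is serving. */
    num_domids = get_served_domids(ctx, &domids);

    for (i = 0; i < num_domids; i++) {
        rc = libxl_reg_aer_events_handler(ctx, domids[i]);
        if (rc)
            fprintf(stderr,
                    "cannot register AER events handler for domain %u\n",
                    domids[i]);
    }

Even with something like this, the question of what the handler is
allowed to do from a driver domain (killing the guest in particular)
still stands.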
>      ret = libxl_evenable_domain_death(ctx, domid, 0, &deathw);
>      if (ret) goto out;
> 
> @@ -993,6 +1002,7 @@ start:
>              LOG("Domain %u has shut down, reason code %d 0x%x", domid,
>                  event->u.domain_shutdown.shutdown_reason,
>                  event->u.domain_shutdown.shutdown_reason);
> +            libxl_unreg_aer_events_handler(ctx, domid);
>              switch (handle_domain_death(&domid, event, &d_config)) {
>              case DOMAIN_RESTART_SOFT_RESET:
>                  domid_soft_reset = domid;
> @@ -1059,6 +1069,7 @@ start:
> 
>          case LIBXL_EVENT_TYPE_DOMAIN_DEATH:
>              LOG("Domain %u has been destroyed.", domid);
> +            libxl_unreg_aer_events_handler(ctx, domid);
>              libxl_event_free(ctx, event);
>              ret = 0;
>              goto out;