Re: [Xen-devel] [PATCH V4 2/2] xenbus: delay xenbus frontend resume if xenstored is not running
On Thu, May 16, 2013 at 03:59:48PM +0100, Aurelien Chartier wrote:
> On 16/05/13 14:32, Jan Beulich wrote:
> >>>> On 16.05.13 at 14:34, Aurelien Chartier <aurelien.chartier@xxxxxxxxxx>
> >>>> wrote:
> >> @@ -440,6 +467,13 @@ static int __init xenbus_probe_frontend_init(void)
> >>
> >>  	register_xenstore_notifier(&xenstore_notifier);
> >>
> >> +	xenbus_frontend_wq = create_workqueue("xenbus_frontend");
> >> +	if (!xenbus_frontend_wq) {
> >> +		printk(KERN_ERR "%s: create "
> >> +			"xenbus_frontend_workqueue failed\n", __func__);
> > pr_err() should be the norm these days.
> >
> > And personally I consider it quite a bad habit to put function names
> > in non-debugging log messages - this doesn't really help with
> > anything (as long as the rest of the message is meaningful), but
> > clutters the log.
> I was using the same format as pciback error handling, but I can switch
> to pr_err.
>
> > And finally, you need to do proper error handling here - this code
> > can be built as a module, and hence leaving notifier and bus
> > registered upon failure sets up the kernel for crashing. Moving
> > the code ahead of register_xenstore_notifier() will take care of
> > one half of the problem, but you'll nevertheless need to call
> > bus_unregister() if you really intend to make this a fatal error
> > condition (which by itself is questionable since in most scenarios
> > you won't need the work queue at all).
> Right, I will move the error handling to the xenbus frontend resume
> function.

Any ETA on when the v5 with these changes will be posted? Thanks.

> Aurelien.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel