Re: [Xen-devel] [PATCH v9 06/17] libxl: convert libxl__device_disk_local_attach to an async op



Ian Jackson writes ("Re: [PATCH v9 06/17] libxl: convert 
libxl__device_disk_local_attach to an async op"):
> Roger Pau Monne writes ("Re: [PATCH v9 06/17] libxl: convert 
> libxl__device_disk_local_attach to an async op"):
> > [stuff]
> 
> Thanks.
> 
> > Ian Jackson wrote:
> > >> diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
> > > ...
> > >> @@ -2191,6 +2202,7 @@ char * libxl__device_disk_local_attach(libxl__gc 
> > >> *gc,
> > >>              default:
> > >>                  LIBXL__LOG(ctx, LIBXL__LOG_ERROR,
> > >>                             "unrecognized disk format: %d", 
> > >> disk->format);
> > >> +                rc = ERROR_FAIL;
> > >>                  break;
> > > 
> > > Why `break' and not `goto out' ?  (Here and later.)
> > > 
> > > Perhaps this would be clear from context.
> > 
> > This is part of the original code, and I don't think it's a good idea to
> > add more patches to my series, if we want to get them in for 4.2.
> 
> Perhaps this would have been clearer in context.  Can I have a git
> branch to look at next time pretty please ? :-)  (I see you're working
> on that, great, thanks.)

I have looked at this in context and it doesn't seem right to me.  Or
at least not clearly right.

`break' here ends up falling out of the switch and executes this:

     if (disk->vdev != NULL) {
         rc = libxl__device_from_disk(gc, LIBXL_TOOLSTACK_DOMID, disk, &device);
         if (rc < 0)
             goto out;

In the general case vdev will be non-NULL, so the rc = ERROR_FAIL is
lost.

I think you can fix this in the same patch as you add the assignments
to rc, changing the error breaks to `goto out'.  Would that make the
code right in your opinion ?

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
