
Re: [Xen-devel] [PATCH v3 19/28] tools/libxl: Convert a legacy stream if needed



Andrew Cooper writes ("[PATCH v3 19/28] tools/libxl: Convert a legacy stream if needed"):
> For backwards compatibility, a legacy stream needs converting before
> it can be read by the v2 stream logic.
> 
> This causes the v2 stream logic to need to juggle two parallel tasks.
> check_all_finished() is introduced for the purpose of joining the
> tasks in both success and error cases.
...
> +    /* If we started a conversion helper, we took ownership of its carefd. */
> +    if (stream->chs.v2_carefd)
> +        libxl__carefd_close(stream->chs.v2_carefd);

This would be more obviously correct (or at least more obviously never
leak a carefd) if you set v2_carefd to NULL here, and asserted its
NULLness at the end of check_all_finished just before making the
callback.

(I looked to see if you initialised it to NULL as well; you don't, but
the existing code is not consistent about whether it works on systems
where all-bits-0 is not NULL, and I think it unlikely that Xen would
ever be made to work on such a system.)
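
Concretely, something like this (just a sketch, reusing the names from
your patch):

  +    /* If we started a conversion helper, we took ownership of its carefd. */
  +    if (stream->chs.v2_carefd) {
  +        libxl__carefd_close(stream->chs.v2_carefd);
  +        stream->chs.v2_carefd = NULL;
  +    }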

> +    /* Don't fire the callback until all our parallel tasks have stopped. */
> +    if (!libxl__stream_read_inuse(stream) &&
> +        !libxl__conversion_helper_inuse(&stream->chs))
> +        stream->completion_callback(egc, stream, stream->rc);

I think this would be clearer if the sense of the if were reversed, so:

  +    /* Don't fire the callback until all our parallel tasks have stopped. */
  +    if (libxl__stream_read_inuse(stream) ||
  +        libxl__conversion_helper_inuse(&stream->chs))
  +        return;
  +
  +    /* At last! */
  +    stream->completion_callback(egc, stream, stream->rc);

But this is a matter of taste, so feel free to leave it as it is.
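
Combined with the carefd point above, the tail of check_all_finished
would then end up as (again only a sketch; assert() here is used the
same way as elsewhere in libxl):

  +    /* At last! */
  +    assert(!stream->chs.v2_carefd);
  +    stream->completion_callback(egc, stream, stream->rc);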

Ian.
