
Re: [Xen-devel] [PATCH 6/6] tools/libxl: Fix datacopier POLLHUP handling to not always be fatal

On 20/02/15 10:43, Ian Campbell wrote:
> On Wed, 2015-02-18 at 16:34 +0000, Andrew Cooper wrote:
>> POLLIN|POLLHUP is a valid revent to receive from poll() when the far end has
>> hung up, but data is still available to be read.
>> Currently all POLLHUPs are fatal.  This is a problem when using the legacy
>> stream conversion script which will exit 0 when it has completed its task and
>> has left valid data in the pipe.  In this case, libxl wishes to read until
>> eof, rather than dying because the conversion script exited.
>> Adjustments are as follows:
>>  1. The datacopier pollhup callback changes type to include a return value.
>>  2. datacopier_handle_pollhup() takes revents by pointer, and calls the
>>     datacopier pollhup callback ahead of killing the datacopier.
>>  3. The callback may now indicate that the POLLHUP is not fatal, in which
>>     case datacopier_handle_pollhup() masks POLLHUP out of revents, and
>>     datacopier_{read,write}able() are allowed to proceed as before.
>> Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
>> CC: Ian Campbell <Ian.Campbell@xxxxxxxxxx>
>> CC: Ian Jackson <Ian.Jackson@xxxxxxxxxxxxx>
>> CC: Wei Liu <wei.liu2@xxxxxxxxxx>
>> ---
>>  tools/libxl/libxl_aoutils.c    |   34 ++++++++++++++++++++++------------
>>  tools/libxl/libxl_bootloader.c |   22 ++++++++++++++++++++--
>>  tools/libxl/libxl_internal.h   |   14 ++++++++++----
>>  3 files changed, 52 insertions(+), 18 deletions(-)
>> diff --git a/tools/libxl/libxl_aoutils.c b/tools/libxl/libxl_aoutils.c
>> index a402e5c..f3e5f98 100644
>> --- a/tools/libxl/libxl_aoutils.c
>> +++ b/tools/libxl/libxl_aoutils.c
>> @@ -182,21 +182,31 @@ void libxl__datacopier_prefixdata(libxl__egc *egc, libxl__datacopier_state *dc,
>>      }
>>  }
>> -static int datacopier_pollhup_handled(libxl__egc *egc,
>> +/*
>> + * Handle a POLLHUP signal on a datacopier fd.  Return boolean indicating
>> + * whether further processing should cease.
> Should it return true to cease? Or false to indicate not to continue?
> (More importantly than here this should be specified precisely in the
> doc header near the callback definition, which it isn't, it just says
> "with its return value").

The sense as stated is correct, but I will spell it out.

>> @@ -603,12 +607,26 @@ static void bootloader_keystrokes_copyfail(libxl__egc *egc,
>>      libxl__bootloader_state *bl = CONTAINER_OF(dc, *bl, keystrokes);
>>      bootloader_copyfail(egc, "bootloader input", bl, 0, onwrite, errnoval);
>>  }
>> +static bool bootloader_keystrokes_pollhup(libxl__egc *egc,
>> +       libxl__datacopier_state *dc, int onwrite)
>> +{
>> +    libxl__bootloader_state *bl = CONTAINER_OF(dc, *bl, keystrokes);
>> +    bootloader_copyfail(egc, "bootloader input", bl, 0, onwrite, -1);
> Is there no valid errno here?

-1 is the documented way of indicating a POLLHUP.

> It's a bit of a shame that callers which don't care about specific
> pollhup handling have to provide two practically identical handlers.

Up until this patch, all users either provided no POLLHUP handler, or
provided the same handler for both function pointers.

> Is there any mileage in suggesting that the default callback type used
> for copyfail might return a bool too, in order that they can be
> shared? Even if it must always return true.
> Or perhaps some method to indicate that on pollhup, if the pollhup
> callback is NULL, use the regular callback? (where some method might be
> the pollhup_callback==NULL itself?)

Previously, every callback was issued after the datacopier had already
been killed.  Now, the pollhup callback is called before the kill has
occurred, and is able to prevent the kill from happening.

A different and far less invasive approach might be to have a per-fd
revent ignore mask.  This would at least allow the callbacks to be made
when the datacopier is in a consistent state.


Xen-devel mailing list