
Re: [Xen-devel] [PATCH 25/29] libvchan: check for fcntl failures in select-type sample application



On 30/10/13 17:06, Daniel De Graaf wrote:
> On 10/30/2013 03:52 AM, Matthew Daley wrote:
>> Coverity-ID: 1055041
>> Signed-off-by: Matthew Daley <mattjd@xxxxxxxxx>
>> ---
>>   tools/libvchan/node-select.c |    6 ++++--
>>   1 file changed, 4 insertions(+), 2 deletions(-)
>>
>> diff --git a/tools/libvchan/node-select.c b/tools/libvchan/node-select.c
>> index 6c6c19e..c6914ab 100644
>> --- a/tools/libvchan/node-select.c
>> +++ b/tools/libvchan/node-select.c
>> @@ -105,8 +105,10 @@ int main(int argc, char **argv)
>>           exit(1);
>>       }
>>
>> -    fcntl(0, F_SETFL, O_NONBLOCK);
>> -    fcntl(1, F_SETFL, O_NONBLOCK);
>> +    if (fcntl(0, F_SETFL, O_NONBLOCK) == -1 || fcntl(1, F_SETFL, O_NONBLOCK) == -1) {
>> +        perror("fcntl");
>> +        exit(1);
>> +    }
>>
>>       libxenvchan_fd = libxenvchan_fd_for_select(ctrl);
>>       for (;;) {
>>
>
> To be completely correct, a call to F_GETFL would be required first, with
> the result ORed with O_NONBLOCK and passed to F_SETFL. That is a separate
> existing bug in the code, however, so this patch is still an improvement
> as-is.
>
> Is the fcntl on line 156 different in some way that does not trigger this
> Coverity check?
>

Hmm - that error wasn't flagged by Coverity at all. I agree that it
suffers from the same problem and needs a similar fix.
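
For reference, a minimal sketch of the F_GETFL/F_SETFL pattern Daniel
describes (the set_nonblocking helper name is mine, not something in
the patch or in node-select.c):

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Fetch the current file status flags and OR in O_NONBLOCK,
     * rather than clobbering the existing flags with a bare F_SETFL. */
    static int set_nonblocking(int fd)
    {
        int flags = fcntl(fd, F_GETFL);
        if (flags == -1)
            return -1;
        return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
    }

    /* The hunk above would then become something like: */
    if (set_nonblocking(0) == -1 || set_nonblocking(1) == -1) {
        perror("fcntl");
        exit(1);
    }

The same helper could cover the fcntl on line 156 as well.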

It is possible Coverity only flagged the first instance of the ignored
return values from the fcntl() calls.  Some of the checkers seem to
have logic which decides that something consistently
wrong/questionable might be by design.  I suspect that if the above
were fixed, then the latter would be identified in the next scan.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
