
Re: [Xen-devel] [PATCH] xen/pci: make bus notifier handler return sane values



>>> On 17.08.11 at 16:57, Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx> wrote:
> On Wed, Aug 17, 2011 at 09:32:32AM +0100, Jan Beulich wrote:
>> Notifier functions are expected to return NOTIFY_* codes, not -E...
>> ones. In particular, since failure of the respective hypercalls is not
>> fatal to the operation of the Dom0 kernel, returning negative values
> 
> So if we fail adding a PCI device, won't we be unable to actually
> set up its MSI?

Sure (and you also can't pass through such a device), but that's no
reason to fail the notification chain. For one, you don't know whether
the driver is actually going to use MSI. And even if you knew, it would
be bad behavior imo. Plus even if you want to fail the notifier chain,
just returning a -E... value here is wrong; notifier_from_errno() ought
to be used then.
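
For reference, here is a sketch of the relevant pieces (the NOTIFY_*
codes from include/linux/notifier.h and the core of the chain walker
in kernel/notifier.c; comments added here, check your tree for the
exact form):

    /* include/linux/notifier.h */
    #define NOTIFY_DONE      0x0000  /* don't care */
    #define NOTIFY_OK        0x0001  /* suits me */
    #define NOTIFY_STOP_MASK 0x8000  /* don't call further */
    #define NOTIFY_BAD       (NOTIFY_STOP_MASK|0x0002)  /* bad/veto action */

    /*
     * Encode an -Exxx value so that the chain stops and the caller
     * can recover the error via notifier_to_errno().
     */
    static inline int notifier_from_errno(int err)
    {
            if (err)
                    return NOTIFY_STOP_MASK | (NOTIFY_BAD - err);
            return NOTIFY_OK;
    }

    /* kernel/notifier.c, heart of notifier_call_chain() */
    ret = nb->notifier_call(nb, val, v);
    if ((ret & NOTIFY_STOP_MASK) == NOTIFY_STOP_MASK)
            break;  /* skip all remaining notifiers on the chain */

A raw negative errno has bit 15 set (two's complement), so it trips
the NOTIFY_STOP_MASK check above exactly as if the notifier had asked
for the chain to be stopped.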

Jan

>> must be avoided here, as doing so would make it appear as if
>> NOTIFY_STOP_MASK was set, suppressing further notification calls to
>> other interested parties (which is also why we don't want to use
>> notifier_from_errno() here).
>> 
>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxxxx>
>> 
>> ---
>>  drivers/xen/pci.c |   11 +++++------
>>  1 file changed, 5 insertions(+), 6 deletions(-)
>> 
>> --- 3.1-rc2/drivers/xen/pci.c
>> +++ 3.1-rc2-xen-pci-bus-notifier/drivers/xen/pci.c
>> @@ -86,23 +86,22 @@ static int xen_pci_notifier(struct notif
>>                          unsigned long action, void *data)
>>  {
>>      struct device *dev = data;
>> -    int r = 0;
>>  
>>      switch (action) {
>>      case BUS_NOTIFY_ADD_DEVICE:
>> -            r = xen_add_device(dev);
>> +            xen_add_device(dev);
>>              break;
>>      case BUS_NOTIFY_DEL_DEVICE:
>> -            r = xen_remove_device(dev);
>> +            xen_remove_device(dev);
>>              break;
>>      default:
>> -            break;
>> +            return NOTIFY_DONE;
>>      }
>>  
>> -    return r;
>> +    return NOTIFY_OK;
>>  }
>>  
>> -struct notifier_block device_nb = {
>> +static struct notifier_block device_nb = {
>>      .notifier_call = xen_pci_notifier,
>>  };
>>  
>> 
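For context, the notifier being fixed here is registered from the same
file, roughly as follows (a sketch based on the 3.1-rc2
drivers/xen/pci.c, not part of the patch):

    static int __init register_xen_pci_notifier(void)
    {
            /* Only the initial domain (dom0) reports PCI devices to Xen. */
            if (!xen_initial_domain())
                    return 0;

            return bus_register_notifier(&pci_bus_type, &device_nb);
    }
    arch_initcall(register_xen_pci_notifier);

This in-file registration is presumably the only user of device_nb,
which is why the patch can also make it static.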