Re: [Xen-devel] [PATCH] xen-pciback: fix up cleanup path when alloc fails
On 12/2/15 4:35 AM, David Vrabel wrote:
> On 26/11/15 20:32, Doug Goldstein wrote:
>> When allocating a pciback device fails, avoid the possibility of a
>> use after free.
>
> We should not require clearing drvdata for correctness. We should
> ensure we retain drvdata for as long as it is needed.
>
> I note that pcistub_device_release() has:
>
>     kfree(dev_data);
>     pci_set_drvdata(dev, NULL);
>
>     /* Clean-up the device */
>     xen_pcibk_config_free_dyn_fields(dev);
>     xen_pcibk_config_free_dev(dev);
>
> which should (at a minimum) be reordered to move the kfree(dev_data)
> to after the calls that require it.
>
> David

I apologize, but at this point I'm confused about what action I should take. Are you NACKing the original patch and suggesting this as the replacement, or saying that this should be done in addition to the original patch?

I created the original patch when looking through the other probe() implementations: they all call pci_set_drvdata() with memory they allocate, and when probe() fails they ensure the drvdata pointer is cleared. The behavior in xen-pciback was different: it kfree()'d the memory passed to pci_set_drvdata() but never set that pointer to NULL, which could result in a use after free. As Konrad pointed out, the use after free doesn't occur today, but it could in the future should some other code changes land. It was more of a defensive coding patch in the end.

I had planned to resubmit the patch with a reworded commit message (after Konrad pointed out there is currently no use after free) while retaining the Reviewed-by, since the code wouldn't change. But if that's not what I should be doing, I will gladly go another route.

--
Doug Goldstein

Attachment:
signature.asc

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel