Re: [Xen-devel] [PATCH v5 9/9] xen/pciback: Implement PCI reset slot or bus with 'do_flr' SysFS attribute
On 05/12/14 17:22, Konrad Rzeszutek Wilk wrote:
> On Fri, Dec 05, 2014 at 10:30:01AM +0000, David Vrabel wrote:
>> On 04/12/14 15:39, Alex Williamson wrote:
>>>
>>> I don't know what workaround you're talking about.  As devices are
>>> released from the user, vfio-pci attempts to reset them.  If
>>> pci_reset_function() returns success we mark the device clean, otherwise
>>> it gets marked dirty.  Each time a device is released, if there are
>>> dirty devices we test whether we can try a bus/slot reset to clean them.
>>> In the case of assigning a GPU this typically means that the GPU or
>>> audio function comes through first, there's no reset mechanism so it gets
>>> marked dirty, the next device comes through and we manage to try a bus
>>> reset.  vfio-pci does not have any device-specific resets, all
>>> functionality is added to the PCI core, thank-you-very-much.  I even
>>> posted a generic PCI quirk patch recently that marks AMD VGA PM reset as
>>> bad so that pci_reset_function() won't claim that worked.  All VGA
>>> access quirks are done in QEMU; the kernel doesn't have any business in
>>> remapping config space over MMIO regions or trapping other config space
>>> backdoors.
>>
>> Thanks for the info Alex, I hadn't got around to actually looking at
>> the vfio-pci code and was just going by what Sander said.
>>
>> We probably do need to have a more in-depth look at how PCI devices are
>> handled by both the toolstack and pciback, but in the short term I would
>> like a simple solution that does not extend the ABI.
>
> Could you enumerate the 'simple solution' then please? I am having
> a frustrating time figuring out what it is that you are proposing.

I've posted it repeatedly.

David
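[Editorial note: a minimal sketch of the reset-on-release pattern Alex
describes above, not the actual vfio-pci code.  The idea: attempt a
per-function reset first, remember ("dirty") devices that could not be
reset, and opportunistically fall back to a slot or bus reset.  The
released_dev structure and try_release_reset() helper are hypothetical;
pci_reset_function(), pci_probe_reset_slot(), pci_try_reset_slot(),
pci_probe_reset_bus() and pci_try_reset_bus() are PCI-core APIs of the
kernels of this era.]

    #include <linux/pci.h>

    struct released_dev {
            struct pci_dev *pdev;
            bool needs_reset;       /* "dirty": function reset failed */
    };

    static void try_release_reset(struct released_dev *rdev)
    {
            struct pci_dev *pdev = rdev->pdev;

            /* Per-function reset (FLR, PM, etc.) handled by the PCI core. */
            if (!pci_reset_function(pdev)) {
                    rdev->needs_reset = false;
                    return;
            }
            rdev->needs_reset = true;

            /*
             * Fall back to a slot or bus reset.  In the real driver this
             * is only attempted once every function affected by the reset
             * is owned by the user and idle.
             */
            if (pdev->slot && !pci_probe_reset_slot(pdev->slot)) {
                    if (!pci_try_reset_slot(pdev->slot))
                            rdev->needs_reset = false;
            } else if (!pci_probe_reset_bus(pdev->bus)) {
                    if (!pci_try_reset_bus(pdev->bus))
                            rdev->needs_reset = false;
            }
    }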