Re: [PATCH] libxl: tooling expects wrong errno
On Mon, Jun 15, 2020 at 03:59:50PM +0100, Ian Jackson wrote:
> Grzegorz Uriasz writes ("[PATCH] libxl: tooling expects wrong errno"):
> > When iommu is not enabled for a given domain then pci passthrough
> > hypercalls such as xc_test_assign_device return EOPNOTSUPP.
> > The code responsible for this is in "iommu_do_domctl" inside
> > xen/drivers/passthrough/iommu.c
> > This patch fixes the error message reported by libxl when assigning
> > pci devices to domains without iommu.
> >
> > Signed-off-by: Grzegorz Uriasz <gorbak25@xxxxxxxxx>
> > Tested-by: Grzegorz Uriasz <gorbak25@xxxxxxxxx>
> > ---
> >  tools/libxl/libxl_pci.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/tools/libxl/libxl_pci.c b/tools/libxl/libxl_pci.c
> > index 957ff5c8e9..bc5843b137 100644
> > --- a/tools/libxl/libxl_pci.c
> > +++ b/tools/libxl/libxl_pci.c
> > @@ -1561,7 +1561,7 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
> >          LOGD(ERROR, domid,
> >               "PCI device %04x:%02x:%02x.%u %s?",
> >               pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func,
> > -             errno == ENOSYS ? "cannot be assigned - no IOMMU"
> > +             errno == EOPNOTSUPP ? "cannot be assigned - no IOMMU"
> >               : "already assigned to a different guest");
> >          goto out;
> >      }
>
> Thanks. I have addressed some Xen IOMMU maintainers. Can you confirm
> whether this is right?

Not an IOMMU maintainer myself, but I've taken a look at the code and I
think Grzegorz is right: iommu_do_domctl will return -EOPNOTSUPP if the
IOMMU is not enabled for the domain.

Another option would be to check for EBUSY (which will certainly be
returned when the device is busy) and log the error code with a message
when it's different from EBUSY. There are many possible errors here;
for example, the device itself might not be behind an IOMMU, in which
case Xen will return -ENODEV, at least in the Intel case.

Thanks, Roger.