Re: [XEN PATCH v10 4/5] tools: Add new function to get gsi from dev
On 18.06.2024 10:10, Chen, Jiqian wrote:
> On 2024/6/17 23:10, Jan Beulich wrote:
>> On 17.06.2024 11:00, Jiqian Chen wrote:
>>> --- a/tools/include/xen-sys/Linux/privcmd.h
>>> +++ b/tools/include/xen-sys/Linux/privcmd.h
>>> @@ -95,6 +95,11 @@ typedef struct privcmd_mmap_resource {
>>>     __u64 addr;
>>>  } privcmd_mmap_resource_t;
>>>
>>> +typedef struct privcmd_gsi_from_dev {
>>> +   __u32 sbdf;
>>
>> That's PCI-centric, without struct and IOCTL names reflecting this fact.
> So, change to privcmd_gsi_from_pcidev?

That's what I'd suggest, yes. But remember that it's the kernel maintainers
who have the ultimate say here, as here you're only making a copy of what the
canonical header (in the kernel tree) is going to have.

>>> +   int gsi;
>>
>> Is "int" legitimate to use here? Doesn't this want to similarly be __u32?
> I want to set gsi to negative if there is no record of this translation.

There are surely more explicit ways to signal that case?
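For example (purely illustrative, and only a sketch: the final struct name,
IOCTL name/number and error convention are for the kernel maintainers to
decide; the definitions below merely follow the rename suggested above):

    typedef struct privcmd_gsi_from_pcidev {
        __u32 sbdf;   /* IN: segment/bus/device/function of the PCI device */
        __u32 gsi;    /* OUT: GSI the device's legacy interrupt is routed to */
    } privcmd_gsi_from_pcidev_t;

    /*
     * "No translation recorded" would then be signalled through the ioctl's
     * return value (e.g. -ENOENT) rather than by a negative value in gsi,
     * allowing the field itself to be __u32.  The matching
     * IOCTL_PRIVCMD_GSI_FROM_PCIDEV request number is deliberately not shown
     * here; it is whatever the canonical kernel header ends up assigning.
     */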
>>> --- a/tools/libs/light/libxl_pci.c
>>> +++ b/tools/libs/light/libxl_pci.c
>>> @@ -1406,6 +1406,12 @@ static bool pci_supp_legacy_irq(void)
>>>  #endif
>>>  }
>>>
>>> +#define PCI_DEVID(bus, devfn)\
>>> +    ((((uint16_t)(bus)) << 8) | ((devfn) & 0xff))
>>> +
>>> +#define PCI_SBDF(seg, bus, devfn) \
>>> +    ((((uint32_t)(seg)) << 16) | (PCI_DEVID(bus, devfn)))
>>
>> I'm not a maintainer of this file; if I were, I'd ask that for readability's
>> sake all excess parentheses be dropped from these.
> Isn't it a coding requirement to enclose each element in parentheses in the
> macro definition?
> It seems other files also do this. See tools/libs/light/libxl_internal.h

As said, I'm not a maintainer of this code. Yet while I'm aware that libxl has
its own CODING_STYLE, I can't spot anything towards excessive use of
parentheses there.

>>> @@ -1486,6 +1496,18 @@ static void pci_add_dm_done(libxl__egc *egc,
>>>          goto out_no_irq;
>>>      }
>>>      if ((fscanf(f, "%u", &irq) == 1) && irq) {
>>> +#ifdef CONFIG_X86
>>> +        sbdf = PCI_SBDF(pci->domain, pci->bus,
>>> +                        (PCI_DEVFN(pci->dev, pci->func)));
>>> +        gsi = xc_physdev_gsi_from_dev(ctx->xch, sbdf);
>>> +        /*
>>> +         * Old kernel version may not support this function,
>>
>> Just kernel?
> Yes, xc_physdev_gsi_from_dev depends on the function implemented on the Linux
> kernel side.

Okay, and when the kernel supports it but the underlying hypervisor doesn't
support what the kernel wants to use in order to fulfill the request, all is
fine? (See also below for what may be needed in the hypervisor, even if this
IOCTL would be satisfied by the kernel without needing to interact with the
hypervisor.)

>>> +         * so if fail, keep using irq; if success, use gsi
>>> +         */
>>> +        if (gsi > 0) {
>>> +            irq = gsi;
>>
>> I'm still puzzled by this, when by now I think we've sufficiently clarified
>> that IRQs and GSIs use two distinct numbering spaces.
>>
>> Also, as previously indicated, you call this for PV Dom0 as well. Aiui on
>> the assumption that it'll fail. What if we decide to make the functionality
>> available there, too (if only for informational purposes, or for
>> consistency)? Suddenly your fallback logic wouldn't work anymore, and you'd
>> call ...
>>
>>> +        }
>>> +#endif
>>>          r = xc_physdev_map_pirq(ctx->xch, domid, irq, &irq);
>>
>> ... the function with a GSI when a pIRQ is meant. Imo, as suggested before,
>> you strictly want to avoid the call on PV Dom0.
>>
>> Also for PVH Dom0: I don't think I've seen changes to the hypercall
>> handling, yet. How can that be when GSI and IRQ aren't the same, and hence
>> incoming GSI would need translating to IRQ somewhere? I can once again only
>> assume all your testing was done with IRQs whose numbers happened to match
>> their GSI numbers. (The difference, imo, would also need calling out in the
>> public header, where the respective interface struct(s) is/are defined.)
> I feel like you missed out on many of the previous discussions.
> Without my changes, the original code uses irq (read from the file
> /sys/bus/pci/devices/<sbdf>/irq) to do xc_physdev_map_pirq,
> but xc_physdev_map_pirq requires passing in a gsi instead of an irq, so we
> need to use gsi whether dom0 is PV or PVH; so the original code is wrong.
> It is only by chance that the irq value in the Linux kernel of PV dom0 is
> equal to the gsi value, so there was no problem with the original PV
> passthrough. But that's not the case when using PVH, so passthrough failed.
> With my changes, I first get the gsi through the function
> xc_physdev_gsi_from_dev, and to be compatible with old kernel versions (where
> the ioctl IOCTL_PRIVCMD_GSI_FROM_DEV is not implemented) I still need to use
> the irq number, so I need to check the result of gsi: if gsi > 0,
> IOCTL_PRIVCMD_GSI_FROM_DEV is implemented and I can use the correct value,
> gsi; otherwise I keep using the wrong one, irq.

I understand all of this, to a (I think) sufficient degree at least. Yet what
you're effectively proposing (without explicitly saying so) is that e.g.
struct physdev_map_pirq's pirq field suddenly may no longer hold a pIRQ
number, but (when the caller is PVH) a GSI. This may be a necessary adjustment
to make (simply because the caller may have no way to express things in pIRQ
terms), but then suitable adjustments to the handling of PHYSDEVOP_map_pirq
would be necessary. In fact that field is presently marked as "IN or OUT";
when re-purposed to take a GSI on input, it may end up being necessary to pass
back the pIRQ (in the subject domain's numbering space). Or alternatively it
may be necessary to add yet another sub-function, so the GSI can be translated
to the corresponding pIRQ for the domain that's going to use the IRQ, for that
then to be passed into PHYSDEVOP_map_pirq.

> And regarding the implementation of the ioctl IOCTL_PRIVCMD_GSI_FROM_DEV, it
> doesn't have any Xen hypercall handling changes; all of its processing logic
> is on the kernel side.
> I know, so you might want to say, "Then you shouldn't put this in Xen's
> code." But this concern was discussed in previous versions, and since the PCI
> maintainer disallowed adding a gsi sysfs on the Linux kernel side, I had to
> do so.

Right, but this is a separate aspect (which we simply need to live with on the
Xen side).

Jan
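P.S. To make the second alternative above a little more concrete, a rough
sketch of what the call site touched by this patch might then look like.
xc_physdev_gsi_to_pirq() (and a matching physdev sub-op behind it) is
hypothetical -- no such wrapper exists today -- and this is meant only to
illustrate the shape of the idea, not to propose concrete code:

    /*
     * Sketch only.  A real version would also need the PV-Dom0 check
     * discussed above, which is omitted here.
     */
    sbdf = PCI_SBDF(pci->domain, pci->bus, PCI_DEVFN(pci->dev, pci->func));
    gsi = xc_physdev_gsi_from_dev(ctx->xch, sbdf);
    if (gsi > 0) {
        int pirq;

        /* Translate GSI -> pIRQ in the subject domain's numbering space. */
        if (xc_physdev_gsi_to_pirq(ctx->xch, domid, gsi, &pirq) == 0)
            irq = pirq;
    }

    /* The existing call then keeps its current meaning: pIRQ in, pIRQ out. */
    r = xc_physdev_map_pirq(ctx->xch, domid, irq, &irq);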