Re: [RFC PATCH 00/19] GICv4 Support for Xen
Hi Mykyta,

Thank you for this patch series. I'll go through it and follow up with
comments shortly.

On Tue, Feb 3, 2026 at 12:02 PM Bertrand Marquis <Bertrand.Marquis@xxxxxxx> wrote:
>
> Hi Mykyta,
>
> We have a number of series from you which have not been merged yet and
> reviewing them all in parallel might be challenging.
>
> Would you mind giving us a status and maybe priorities on them?
>
> I could list the following series:
> - GICv4
> - CPU Hotplug on arm
> - PCI enumeration on arm
> - IPMMU for pci on arm
> - dom0less for pci passthrough on arm
> - SR-IOV for pvh
> - SMMU for pci on arm
> - MSI injection on arm
> - suspend to ram on arm
>
> There might be others, feel free to complete the list.
>
> On GICv4...
>
> > On 2 Feb 2026, at 17:14, Mykyta Poturai <Mykyta_Poturai@xxxxxxxx> wrote:
> >
> > This series introduces GICv4 direct LPI injection for Xen.
> >
> > Direct LPI injection relies on the GIC tracking the mapping between
> > physical and virtual CPUs. Each VCPU requires a VPE that is created and
> > registered with the GIC via the `VMAPP` ITS command. The GIC is then
> > informed of the current VPE-to-PCPU placement by programming `VPENDBASER`
> > and `VPROPBASER` in the appropriate redistributor. LPIs are associated
> > with VPEs through the `VMAPTI` ITS command, after which the GIC handles
> > delivery without trapping into the hypervisor for each interrupt.
> >
> > When a VPE is not scheduled but has pending interrupts, the GIC raises a
> > per-VPE doorbell LPI. Doorbells are owned by the hypervisor and prompt
> > rescheduling so the VPE can drain its pending LPIs.
> >
> > Because GICv4 lacks a native doorbell invalidation mechanism, this series
> > includes a helper that invalidates doorbell LPIs via synthetic “proxy”
> > devices, following the approach used until GICv4.1.
> >
> > All of this work is mostly based on the work of Penny Zheng
> > <penny.zheng@xxxxxxx> and Luca Fancellu <luca.fancellu@xxxxxxx>, and also
> > on Linux patches by Marc Zyngier.
> >
> > Some patches are still a little rough and need some styling fixes and
> > more testing, as all of them needed to be carved line by line from a
> > giant ~4000 line patch. This RFC is mostly intended to get a general
> > idea of whether the proposed approach is suitable and OK with everyone.
> > There is also still an open question of how to handle Signed-off-by
> > lines for Penny and Luca, since they have not indicated their preference
> > yet.
>
> I would like to ask how much performance benefit you could have with this.
> Adding GICv4 support adds a lot of code which will have to be maintained
> and tested, and there should be a good improvement to justify this.
>
> Did you do some benchmarks? What are the results?

One more benchmarking note (and rationale): for meaningful performance
testing it may be necessary to disable WFI trapping (boot Xen with
`vwfi=native`). If WFI is trapped, each guest idle instruction traps into
Xen, and Xen typically deschedules the vCPU. This makes the vCPU become
"non-resident" more often, so subsequent wakeups (e.g. vSGI/vLPI) tend to
go through a slower host-mediated path (waking the vCPU via the scheduler
and performing extra state transitions) instead of letting the hardware
wake and deliver to a running guest quickly.

For this reason it may be worth conditionally recommending `vwfi=native`
when direct injection is enabled for a vCPU, or even auto-selecting it in
that mode, so measurements reflect the actual delivery fast path rather
than exit/scheduling overhead. It might also be worth adding this as a
small patch in the series, or at least documenting it prominently:
trapping WFI skews both behaviour and benchmarks by pushing the vCPU into
a "non-resident" state more often and forcing wakeups through the
host/scheduler path, whereas keeping WFI untrapped keeps the fast path
measurable and predictable.
Best regards,
Mykola

> At the time when we started to work on that at Arm, we came to the
> conclusion that the complexity in Xen compared to the benefit did not
> justify it, hence why this work was stopped in favor of other features
> that we thought would be more beneficial to Xen (like PCI passthrough
> or SMMUv3).
>
> Cheers
> Bertrand
>
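As an aside for readers following the thread: the per-vCPU setup described
in the quoted cover letter corresponds roughly to the sequence sketched
below. This is for orientation only; every type and function name here is
hypothetical rather than an actual Xen (or Linux) interface, and error
handling is omitted.

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical representation of a vPE (one per vCPU). */
struct vpe {
    uint16_t vpe_id;
    uint32_t doorbell_lpi;   /* owned by Xen, raised when the vPE is not resident */
};

/* Hypothetical low-level helpers standing in for ITS/redistributor accessors. */
extern uint16_t alloc_vpe_id(void);
extern uint32_t alloc_doorbell_lpi(void);
extern void its_send_vmapp(const struct vpe *vpe, bool valid);
extern void its_send_vmapti(uint32_t devid, uint32_t eventid,
                            uint32_t vintid, const struct vpe *vpe);
extern void rdist_program_vpropbaser(unsigned int pcpu, const struct vpe *vpe);
extern void rdist_program_vpendbaser(unsigned int pcpu, const struct vpe *vpe);

/* Once per vCPU: create the vPE and register it with the ITS (VMAPP). */
static void vpe_init(struct vpe *vpe)
{
    vpe->vpe_id = alloc_vpe_id();
    vpe->doorbell_lpi = alloc_doorbell_lpi();
    its_send_vmapp(vpe, true);
}

/*
 * On every schedule-in/migration: tell the redistributor of the target
 * PCPU where this vPE's property and pending tables live
 * (GICR_VPROPBASER / GICR_VPENDBASER).
 */
static void vpe_make_resident(const struct vpe *vpe, unsigned int pcpu)
{
    rdist_program_vpropbaser(pcpu, vpe);
    rdist_program_vpendbaser(pcpu, vpe);
}

/*
 * For each device interrupt assigned to the guest: bind the (devid,
 * eventid) pair to a virtual INTID on this vPE (VMAPTI), after which the
 * GIC delivers it directly while the vPE is resident.
 */
static void vpe_map_vlpi(const struct vpe *vpe, uint32_t devid,
                         uint32_t eventid, uint32_t vintid)
{
    its_send_vmapti(devid, eventid, vintid, vpe);
}

The doorbell LPI is the part Xen still handles itself: when it fires for a
non-resident vPE, the corresponding vCPU is woken and rescheduled so it can
drain its pending vLPIs, as the cover letter describes.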