Re: [PATCH 1/2] xen/arm: Improve handling of nr_spis
On 11/03/2025 10:30, Bertrand Marquis wrote:
>
>
> Hi Michal,
>
>> On 11 Mar 2025, at 10:04, Michal Orzel <michal.orzel@xxxxxxx> wrote:
>>
>> At the moment, we print a warning about max number of IRQs supported by
>> GIC bigger than vGIC only for hardware domain. This check is not hwdom
>> special, and should be made common. Also, in case of user not specifying
>> nr_spis for dom0less domUs, we should take into account max number of
>> IRQs supported by vGIC if it's smaller than for GIC.
>>
>> Introduce VGIC_MAX_IRQS macro and use it instead of hardcoded 992 value.
>> Fix calculation of nr_spis for dom0less domUs and make the GIC/vGIC max
>> IRQs comparison common.
>>
>> Signed-off-by: Michal Orzel <michal.orzel@xxxxxxx>
>> ---
>> xen/arch/arm/dom0less-build.c   | 2 +-
>> xen/arch/arm/domain_build.c     | 9 ++-------
>> xen/arch/arm/gic.c              | 3 +++
>> xen/arch/arm/include/asm/vgic.h | 3 +++
>> 4 files changed, 9 insertions(+), 8 deletions(-)
>>
>> diff --git a/xen/arch/arm/dom0less-build.c b/xen/arch/arm/dom0less-build.c
>> index 31f31c38da3f..9a84fee94119 100644
>> --- a/xen/arch/arm/dom0less-build.c
>> +++ b/xen/arch/arm/dom0less-build.c
>> @@ -1018,7 +1018,7 @@ void __init create_domUs(void)
>>          {
>>              int vpl011_virq = GUEST_VPL011_SPI;
>>
>> -            d_cfg.arch.nr_spis = gic_number_lines() - 32;
>> +            d_cfg.arch.nr_spis = min(gic_number_lines(), VGIC_MAX_IRQS) - 32;
>
> I would suggest to introduce a static inline gic_nr_spis in a gic header ...
Why GIC and not vGIC? This is the domain's nr_spis, so vGIC. But then, why
static inline if the value does not change and is domain agnostic? I struggle
to find a good name for this macro. Maybe (in vgic.h):

#define vgic_def_nr_spis (min(gic_number_lines(), VGIC_MAX_IRQS) - 32)

to denote the default nr_spis if not set by the user?

>
>>
>>              /*
>>               * The VPL011 virq is GUEST_VPL011_SPI, unless direct-map is
>> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>> index 7cc141ef75e9..b99c4e3a69bf 100644
>> --- a/xen/arch/arm/domain_build.c
>> +++ b/xen/arch/arm/domain_build.c
>> @@ -2371,13 +2371,8 @@ void __init create_dom0(void)
>>
>>      /* The vGIC for DOM0 is exactly emulating the hardware GIC */
>>      dom0_cfg.arch.gic_version = XEN_DOMCTL_CONFIG_GIC_NATIVE;
>> -    /*
>> -     * Xen vGIC supports a maximum of 992 interrupt lines.
>> -     * 32 are substracted to cover local IRQs.
>> -     */
>> -    dom0_cfg.arch.nr_spis = min(gic_number_lines(), (unsigned int) 992) - 32;
>> -    if ( gic_number_lines() > 992 )
>> -        printk(XENLOG_WARNING "Maximum number of vGIC IRQs exceeded.\n");
>> +    /* 32 are substracted to cover local IRQs */
>> +    dom0_cfg.arch.nr_spis = min(gic_number_lines(), VGIC_MAX_IRQS) - 32;
>
> and reuse it here to make sure the value used is always the same.
>
>>      dom0_cfg.arch.tee_type = tee_get_type();
>>      dom0_cfg.max_vcpus = dom0_max_vcpus();
>>
>> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
>> index acf61a4de373..e80fe0ca2421 100644
>> --- a/xen/arch/arm/gic.c
>> +++ b/xen/arch/arm/gic.c
>> @@ -251,6 +251,9 @@ void __init gic_init(void)
>>          panic("Failed to initialize the GIC drivers\n");
>>      /* Clear LR mask for cpu0 */
>>      clear_cpu_lr_mask();
>> +
>> +    if ( gic_number_lines() > VGIC_MAX_IRQS )
>> +        printk(XENLOG_WARNING "Maximum number of vGIC IRQs exceeded\n");
>
> I am a bit unsure with this one.
> If this is the case, it means any gicv2 or gicv3 init detected an impossible
> value and any usage of gic_number_lines would be using an impossible value.
Why impossible? The GIC can support up to 1020 IRQs. Our vGIC can support up
to 992 IRQs.
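
For reference, a minimal sketch of how the pieces above could fit together,
assuming VGIC_MAX_IRQS lives in vgic.h as in the patch and the default-nr_spis
helper takes the macro form suggested above (the 992U suffix, the comment
wording and the exact name are still open):

    /* xen/arch/arm/include/asm/vgic.h */

    /* Xen's vGIC supports a maximum of 992 interrupt lines. */
    #define VGIC_MAX_IRQS 992U

    /*
     * Default number of SPIs for a domain when the user does not specify one.
     * The hardware GIC may expose up to 1020 IRQs, so clamp to what the vGIC
     * can emulate; 32 are subtracted to cover local IRQs (SGIs + PPIs).
     * gic_number_lines() is declared in asm/gic.h.
     */
    #define vgic_def_nr_spis (min(gic_number_lines(), VGIC_MAX_IRQS) - 32)

Both call sites would then reduce to:

    d_cfg.arch.nr_spis = vgic_def_nr_spis;       /* create_domUs() */
    dom0_cfg.arch.nr_spis = vgic_def_nr_spis;    /* create_dom0() */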
>
> Shouldn't this somehow result in a panic as the gic detection was wrong ?
> Do you think we can continue to work safely if this value is over
> VGIC_MAX_IRQS.
> There are other places using gic_number_lines like in irq.c.
I can add a panic, sure. This would be a change in behavior though, because so
far we did this check (as a warning only) for hwdom unconditionally. I'd need
others' opinions for this one.

~Michal
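
For completeness, the panic variant being discussed would amount to something
like this in gic_init() (a rough sketch, not part of the posted patch; the
message wording is only illustrative):

    /*
     * Refuse to boot, rather than only warn, if the hardware GIC exposes
     * more IRQs than the vGIC can emulate.
     */
    if ( gic_number_lines() > VGIC_MAX_IRQS )
        panic("GIC: %u IRQs exceed the vGIC maximum of %u\n",
              gic_number_lines(), VGIC_MAX_IRQS);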