
Re: [PATCH 1/2] xen/arm: Improve handling of nr_spis


  • To: Bertrand Marquis <Bertrand.Marquis@xxxxxxx>
  • From: "Orzel, Michal" <michal.orzel@xxxxxxx>
  • Date: Tue, 11 Mar 2025 10:59:58 +0100
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>
  • Delivery-date: Tue, 11 Mar 2025 10:00:15 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>


On 11/03/2025 10:30, Bertrand Marquis wrote:
> 
> 
> Hi Michal,
> 
>> On 11 Mar 2025, at 10:04, Michal Orzel <michal.orzel@xxxxxxx> wrote:
>>
>> At the moment, we print a warning about the GIC supporting more IRQs than
>> the vGIC only for the hardware domain. This check is not hwdom-specific
>> and should be made common. Also, when the user does not specify nr_spis
>> for dom0less domUs, we should take into account the maximum number of
>> IRQs supported by the vGIC if it is smaller than that of the GIC.
>>
>> Introduce the VGIC_MAX_IRQS macro and use it instead of the hardcoded 992
>> value. Fix the calculation of nr_spis for dom0less domUs and make the
>> GIC/vGIC max IRQs comparison common.
>>
>> Signed-off-by: Michal Orzel <michal.orzel@xxxxxxx>
>> ---
>> xen/arch/arm/dom0less-build.c   | 2 +-
>> xen/arch/arm/domain_build.c     | 9 ++-------
>> xen/arch/arm/gic.c              | 3 +++
>> xen/arch/arm/include/asm/vgic.h | 3 +++
>> 4 files changed, 9 insertions(+), 8 deletions(-)
>>
>> diff --git a/xen/arch/arm/dom0less-build.c b/xen/arch/arm/dom0less-build.c
>> index 31f31c38da3f..9a84fee94119 100644
>> --- a/xen/arch/arm/dom0less-build.c
>> +++ b/xen/arch/arm/dom0less-build.c
>> @@ -1018,7 +1018,7 @@ void __init create_domUs(void)
>>         {
>>             int vpl011_virq = GUEST_VPL011_SPI;
>>
>> -            d_cfg.arch.nr_spis = gic_number_lines() - 32;
>> +            d_cfg.arch.nr_spis = min(gic_number_lines(), VGIC_MAX_IRQS) - 32;
> 
> I would suggest introducing a static inline gic_nr_spis in a GIC header ...
Why GIC and not vGIC? This is the domain's nr_spis, so vGIC.
But then, why a static inline if the value does not change and is domain-agnostic?
I struggle to find a good name for this macro. Maybe (in vgic.h):
#define vgic_def_nr_spis (min(gic_number_lines(), VGIC_MAX_IRQS) - 32)
to denote the default nr_spis if not set by the user?
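
For illustration, a minimal sketch of how this could look in vgic.h (the
names follow the proposal above; this is purely illustrative, not a final
form):

    /* Maximum number of IRQs (32 local IRQs + SPIs) the Xen vGIC can emulate. */
    #define VGIC_MAX_IRQS 992U

    /*
     * Default number of SPIs for a domain when not set by the user: the
     * hardware GIC lines, capped by what the vGIC can emulate, minus the
     * 32 local IRQs (SGIs + PPIs).
     */
    #define vgic_def_nr_spis (min(gic_number_lines(), VGIC_MAX_IRQS) - 32)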

> 
>>
>>             /*
>>              * The VPL011 virq is GUEST_VPL011_SPI, unless direct-map is
>> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>> index 7cc141ef75e9..b99c4e3a69bf 100644
>> --- a/xen/arch/arm/domain_build.c
>> +++ b/xen/arch/arm/domain_build.c
>> @@ -2371,13 +2371,8 @@ void __init create_dom0(void)
>>
>>     /* The vGIC for DOM0 is exactly emulating the hardware GIC */
>>     dom0_cfg.arch.gic_version = XEN_DOMCTL_CONFIG_GIC_NATIVE;
>> -    /*
>> -     * Xen vGIC supports a maximum of 992 interrupt lines.
>> -     * 32 are substracted to cover local IRQs.
>> -     */
>> -    dom0_cfg.arch.nr_spis = min(gic_number_lines(), (unsigned int) 992) - 32;
>> -    if ( gic_number_lines() > 992 )
>> -        printk(XENLOG_WARNING "Maximum number of vGIC IRQs exceeded.\n");
>> +    /* 32 are substracted to cover local IRQs */
>> +    dom0_cfg.arch.nr_spis = min(gic_number_lines(), VGIC_MAX_IRQS) - 32;
> 
> and reuse it here to make sure the value used is always the same.
> 
>>     dom0_cfg.arch.tee_type = tee_get_type();
>>     dom0_cfg.max_vcpus = dom0_max_vcpus();
>>
>> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
>> index acf61a4de373..e80fe0ca2421 100644
>> --- a/xen/arch/arm/gic.c
>> +++ b/xen/arch/arm/gic.c
>> @@ -251,6 +251,9 @@ void __init gic_init(void)
>>         panic("Failed to initialize the GIC drivers\n");
>>     /* Clear LR mask for cpu0 */
>>     clear_cpu_lr_mask();
>> +
>> +    if ( gic_number_lines() > VGIC_MAX_IRQS )
>> +        printk(XENLOG_WARNING "Maximum number of vGIC IRQs exceeded\n");
> 
> I am a bit unsure with this one.
> If this is the case, it means any gicv2 or gicv3 init detected an impossible
> value and any usage of gic_number_lines would be using an impossible value.
Why impossible? The GIC architecture can support up to 1020 IRQs (INTIDs
1020-1023 are reserved), while our vGIC can only emulate up to 992 IRQs, so a
value in between is perfectly possible on hardware.
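
Concretely, with this patch applied: a GIC reporting the architectural
maximum of 1020 lines gives nr_spis = min(1020, 992) - 32 = 960 and triggers
the warning, while a more typical GIC with, say, 288 lines gives
nr_spis = 288 - 32 = 256 and no warning.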

> 
> Shouldn't this somehow result in a panic, as the GIC detection was wrong?
> Do you think we can continue to work safely if this value is over
> VGIC_MAX_IRQS?
> There are other places using gic_number_lines, like in irq.c.
I can add a panic, sure. This would be a change in behavior, because we used
this check unconditionally for hwdom. I'd need others' opinions on this one.
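
If the consensus is to be stricter, the change would be small. A sketch of
the panic variant in gic_init() (illustrative only, pending others'
opinions):

    if ( gic_number_lines() > VGIC_MAX_IRQS )
        panic("GIC: %u IRQ lines exceed the vGIC maximum of %u\n",
              gic_number_lines(), VGIC_MAX_IRQS);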

~Michal
