Re: [Xen-devel] [PATCH v6 19/36] ARM: VGIC: add vcpu_id to struct pending_irq
Hi Stefano,

On 04/07/2017 11:14 PM, Stefano Stabellini wrote:
> On Fri, 7 Apr 2017, Julien Grall wrote:
>> Hi Andre,
>>
>> On 04/07/2017 06:32 PM, Andre Przywara wrote:
>>> The target CPU for an LPI is encoded in the interrupt translation
>>> table entry, so it can't be easily derived from just an LPI number
>>> (short of walking *all* tables and finding the matching LPI).
>>> To avoid this in case we need to know the VCPU (for the INVALL
>>> command, for instance), put the VCPU ID in the struct pending_irq,
>>> so that it is easily accessible.
>>>
>>> Signed-off-by: Andre Przywara <andre.przywara@xxxxxxx>
>>> ---
>>>  xen/include/asm-arm/vgic.h | 3 +++
>>>  1 file changed, 3 insertions(+)
>>>
>>> diff --git a/xen/include/asm-arm/vgic.h b/xen/include/asm-arm/vgic.h
>>> index 86a1a89..9145b42 100644
>>> --- a/xen/include/asm-arm/vgic.h
>>> +++ b/xen/include/asm-arm/vgic.h
>>> @@ -88,6 +88,9 @@ struct pending_irq
>>>       * TODO: when implementing irq migration, taking only the current
>>>       * vgic lock is not going to be enough.
>>>       */
>>>      struct list_head lr_queue;
>>> +#ifdef CONFIG_HAS_ITS
>>> +    uint16_t lpi_vcpu_id;       /* The VCPU for an LPI. */
>>> +#endif
>>
>> I am sorry, but in the current state this is a nack from my side.
>>
>> The pending_irq structure will increase by 8 bytes because of the
>> alignment. This will affect all the pending_irq structures as soon as
>> CONFIG_HAS_ITS is enabled, even if the platform is not using an ITS.
>>
>> Looking at the current usage, it is only used by INVALL, which leads
>> me to confirm my first nack.
>>
>> Please either find a hole in pending_irq or find an alternative
>> solution.
>
> Yeah, I was thinking the same thing. That is why I suggested the
> #ifdef CONFIG_HAS_ITS. At least the problem would remain confined to
> ITS. I recognize that we need a better solution, but I would be
> willing to take this as-is with the ifdef for 4.9, assuming that
> everything else is in order.

Still, it is 8 bytes per IRQ, which means an increase in internal memory
usage of up to 8KB (assuming a DOM0 with 1024 interrupts).

Xen currently supports only up to 128 vCPUs, and there is a 1-byte hole
in the structure. As that is plenty, why don't we use this hole?

Cheers,

--
Julien Grall
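For reference, a minimal, self-contained sketch of the alignment effect
Julien describes; the field names are illustrative, not the real Xen
pending_irq layout. On a 64-bit target the structure is 8-byte aligned
because it contains pointers, so appending a 2-byte member grows
sizeof() by a full 8 bytes of tail padding:

    /* Sketch only -- not the actual Xen structure.  Compile and run on
     * an LP64 target to see the 8-byte growth. */
    #include <stdint.h>
    #include <stdio.h>

    struct list_head { struct list_head *next, *prev; };

    struct pirq_before {
        unsigned long status;          /*  8 bytes                     */
        struct list_head lr_queue;     /* 16 bytes                     */
    };                                 /* sizeof == 24                 */

    struct pirq_after {
        unsigned long status;
        struct list_head lr_queue;
        uint16_t lpi_vcpu_id;          /* 2 bytes of data, then 6 bytes
                                        * of tail padding to keep the
                                        * struct 8-byte aligned        */
    };                                 /* sizeof == 32                 */

    int main(void)
    {
        printf("before: %zu\n", sizeof(struct pirq_before)); /* 24 */
        printf("after:  %zu\n", sizeof(struct pirq_after));  /* 32 */
        return 0;
    }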
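And a sketch of the alternative Julien suggests: since Xen supports at
most 128 vCPUs, a uint8_t is wide enough for the vCPU ID, and storing it
in an existing padding hole leaves the structure size unchanged. Again
the layout and field names (e.g. "flags") are hypothetical, chosen only
to produce a 1-byte hole like the one Julien points at:

    /* Sketch only -- three 1-byte fields after a 4-byte int leave a
     * 1-byte hole before the 8-byte-aligned list_head; a uint8_t vCPU
     * ID fills it at no cost in size. */
    #include <stdint.h>
    #include <stdio.h>

    struct list_head { struct list_head *next, *prev; };

    struct pirq_hole {
        unsigned int irq;              /* offsets 0..3                 */
        uint8_t lr;                    /* offset 4                     */
        uint8_t priority;              /* offset 5                     */
        uint8_t flags;                 /* offset 6                     */
                                       /* 1-byte hole at offset 7      */
        struct list_head inflight;     /* 8-byte aligned, offset 8     */
    };                                 /* sizeof == 24                 */

    struct pirq_hole_filled {
        unsigned int irq;
        uint8_t lr;
        uint8_t priority;
        uint8_t flags;
        uint8_t lpi_vcpu_id;           /* fills the hole at offset 7   */
        struct list_head inflight;
    };                                 /* sizeof still 24              */

    int main(void)
    {
        printf("with hole: %zu\n", sizeof(struct pirq_hole));        /* 24 */
        printf("filled:    %zu\n", sizeof(struct pirq_hole_filled)); /* 24 */
        return 0;
    }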