
Re: [PATCH v7 02/12] xen/arm: gic-v2: Implement GIC suspend/resume functions



Hi Julien,

Thanks for the review.

On Fri, Dec 26, 2025 at 2:29 PM Julien Grall <julien@xxxxxxx> wrote:
>
> Hi Mykola,
>
> On 11/12/2025 18:43, Mykola Kvach wrote:
> > diff --git a/xen/arch/arm/gic-v2.c b/xen/arch/arm/gic-v2.c
> > index b23e72a3d0..0b2f7b3862 100644
> > --- a/xen/arch/arm/gic-v2.c
> > +++ b/xen/arch/arm/gic-v2.c
> > @@ -1098,6 +1098,123 @@ static int gicv2_iomem_deny_access(struct domain *d)
> >       return iomem_deny_access(d, mfn, mfn + nr);
> >   }
> >
> > +#ifdef CONFIG_SYSTEM_SUSPEND
> > +
> > +/* This struct represent block of 32 IRQs */
> > +struct irq_block {
> > +    uint32_t icfgr[2]; /* 2 registers of 16 IRQs each */
> > +    uint32_t ipriorityr[8];
> > +    uint32_t isenabler;
> > +    uint32_t isactiver;
> > +    uint32_t itargetsr[8];
> > +};
> > +
> > +/* GICv2 registers to be saved/restored on system suspend/resume */
> > +struct gicv2_context {
> > +    /* GICC context */
> > +    struct cpu_ctx {
> > +        uint32_t ctlr;
> > +        uint32_t pmr;
> > +        uint32_t bpr;
> > +    } cpu;
> > +
> > +    /* GICD context */
> > +    struct dist_ctx {
> > +        uint32_t ctlr;
> > +        struct irq_block *irqs;
>
> To confirm my understanding, this will also include the PPIs, SGIs for
> the boot CPU, am I correct? If so, I would suggest to add a comment.

Yes, correct. I’ll add a comment to make it explicit that this includes
SGIs/PPIs for the boot CPU.

>
> > +    } dist;
> > +};
> > +
> > +static struct gicv2_context gic_ctx;
> > +
> > +static int gicv2_suspend(void)
> > +{
> > +    unsigned int i, blocks = DIV_ROUND_UP(gicv2_info.nr_lines, 32);
> > +
> > +    /* Save GICC configuration */
> > +    gic_ctx.cpu.ctlr = readl_gicc(GICC_CTLR);
> > +    gic_ctx.cpu.pmr = readl_gicc(GICC_PMR);
> > +    gic_ctx.cpu.bpr = readl_gicc(GICC_BPR);
> > +
> > +    /* Save GICD configuration */
> > +    gic_ctx.dist.ctlr = readl_gicd(GICD_CTLR);
>
> Do we want to disable the GIC distributor and CPU interface on suspend?

I think we should quiesce the CPU interface after saving its state, but
not disable the distributor globally: on platforms where the wake
request is routed from the GIC to the PMU/SCP (e.g. via
nIRQOUT/nFIQOUT), disabling GICD could prevent wake-up interrupts from
being signalled. So I'd quiesce GICC and keep GICD enabled.

Are you OK with this approach?
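To make the intended ordering concrete, here is a minimal sketch of the
save-then-quiesce step. It uses an in-memory stand-in for GICC_CTLR
purely for illustration (the stand-in variables and the helper name
gicv2_quiesce_cpu_interface() are hypothetical); the real code would go
through readl_gicc()/writel_gicc() and would leave GICD_CTLR untouched:

```c
#include <stdint.h>

/* Hypothetical in-memory stand-ins for the GICC registers, used only to
 * illustrate the ordering; real code uses readl_gicc()/writel_gicc(). */
static uint32_t fake_gicc_ctlr = 1;  /* CPU interface currently enabled */
static uint32_t saved_ctlr;

/*
 * Save the GICC state first, then quiesce the CPU interface.
 * The distributor is deliberately left enabled so that a wake-up
 * interrupt can still be forwarded to the PMU/SCP (nIRQOUT/nFIQOUT).
 */
static void gicv2_quiesce_cpu_interface(void)
{
    saved_ctlr = fake_gicc_ctlr;  /* save before touching the register */
    fake_gicc_ctlr = 0;           /* disable signalling to this CPU */
}
```

On resume, the saved value would simply be written back as the last step
of gicv2_resume(), after the distributor state has been restored.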

>
> > +
> > +    for ( i = 0; i < blocks; i++ )
> > +    {
> > +        struct irq_block *irqs = gic_ctx.dist.irqs + i;
> > +        size_t j, off = i * sizeof(irqs->isenabler);
> > +
> > +        irqs->isenabler = readl_gicd(GICD_ISENABLER + off);
> > +        irqs->isactiver = readl_gicd(GICD_ISACTIVER + off);
> > +
> > +        off = i * sizeof(irqs->ipriorityr);
> > +        for ( j = 0; j < ARRAY_SIZE(irqs->ipriorityr); j++ )
> > +        {
> > +            irqs->ipriorityr[j] = readl_gicd(GICD_IPRIORITYR + off + j * 4);
> > +            irqs->itargetsr[j] = readl_gicd(GICD_ITARGETSR + off + j * 4);
> > +        }
> > +
> > +        off = i * sizeof(irqs->icfgr);
> > +        for ( j = 0; j < ARRAY_SIZE(irqs->icfgr); j++ )
> > +            irqs->icfgr[j] = readl_gicd(GICD_ICFGR + off + j * 4);
> > +    }
> > +
> > +    return 0;
> > +}
> > +
> > +static void gicv2_resume(void)
> > +{
> > +    unsigned int i, blocks = DIV_ROUND_UP(gicv2_info.nr_lines, 32);
> > +
> > +    gicv2_cpu_disable();
> > +    /* Disable distributor */
> > +    writel_gicd(0, GICD_CTLR);
> > +
> > +    for ( i = 0; i < blocks; i++ )
> > +    {
> > +        struct irq_block *irqs = gic_ctx.dist.irqs + i;
> > +        size_t j, off = i * sizeof(irqs->isenabler);
> > +
> > +        writel_gicd(0xffffffffU, GICD_ICENABLER + off);
>
> NIT: Can we use GENMASK? This will make easier to confirm we have the
> correct number of bits.

Sure, I'll change it to GENMASK.
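For reference, the point of the change is that GENMASK(31, 0) expands to
the same value as the 0xffffffffU literal while making the bit count
explicit. A simplified 32-bit variant of the macro (not Xen's exact
definition, which operates on unsigned long) behaves like this:

```c
#include <stdint.h>

/*
 * Simplified 32-bit GENMASK(h, l): set bits h..l inclusive.
 * Illustrative only; Xen's real macro is defined over unsigned long.
 */
#define GENMASK(h, l) \
    ((~0U >> (31 - (h))) & (~0U << (l)))
```

So the ICENABLER write would become
writel_gicd(GENMASK(31, 0), GICD_ICENABLER + off), which makes the
"all 32 IRQs in the block" intent visible at the call site.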

Best regards,
Mykola


>
> [...]
>
> Cheers,
>
> --
> Julien Grall
>
