
Re: [Xen-devel] [PATCH v3] arm: support fewer LR registers than virtual irqs



On Fri, 17 Feb 2012, Ian Campbell wrote:
> On Wed, 2012-02-15 at 14:03 +0000, Stefano Stabellini wrote:
> > If the vgic needs to inject a virtual irq into the guest, but no free
> > LR registers are available, add the irq to a list and return.
> > Whenever an LR register becomes available we add the queued irq to it
> > and remove it from the list.
> > We use the gic lock to protect the list and the bitmask.
> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
> > ---
> >  xen/arch/arm/gic.c           |   61 ++++++++++++++++++++++++++++++++++++++---
> >  xen/include/asm-arm/domain.h |    1 +
> >  2 files changed, 57 insertions(+), 5 deletions(-)
> > 
> > diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> > index adc10bb..520a400 100644
> > --- a/xen/arch/arm/gic.c
> > +++ b/xen/arch/arm/gic.c
> > @@ -25,6 +25,7 @@
> >  #include <xen/sched.h>
> >  #include <xen/errno.h>
> >  #include <xen/softirq.h>
> > +#include <xen/list.h>
> >  #include <asm/p2m.h>
> >  #include <asm/domain.h>
> >  
> > @@ -45,6 +46,8 @@ static struct {
> >      unsigned int lines;
> >      unsigned int cpus;
> >      spinlock_t lock;
> > +    uint64_t lr_mask;
> 
> Is there somewhere in the code which clamps the number of LRs reported
> by the h/w to 64? (and warns appropriately)

The hardware spec :)
64 is the maximum number of LR registers.
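(The ListRegs field of GICH_VTR is 6 bits wide and encodes the number of
implemented LRs minus one, so 64 really is the architectural ceiling.)
Still, it would be cheap to document that assumption where nr_lrs is read;
a rough, untested sketch, with the GICH_VTR index assumed to follow the
other GICH_* constants used in gic.c:

    /* Sketch only: GICH_VTR.ListRegs (bits [5:0]) encodes the number of
     * implemented LRs minus one, so the result can never exceed 64; the
     * check below merely documents why a 64-bit lr_mask is enough.
     * nr_lrs is assumed to be declared elsewhere in gic.c. */
    nr_lrs = (GICH[GICH_VTR] & 0x3f) + 1;
    if ( nr_lrs > sizeof(gic.lr_mask) * 8 )    /* cannot happen per the spec */
    {
        printk("GICH: %d LRs reported, clamping to 64\n", nr_lrs);
        nr_lrs = sizeof(gic.lr_mask) * 8;
    }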


> This is a mask of in-use LRs, right?

Yes


> > +    struct list_head lr_pending;
> >  } gic;
> >  
> >  irq_desc_t irq_desc[NR_IRQS];
> > @@ -247,6 +250,8 @@ static void __cpuinit gic_hyp_init(void)
> >  
> >      GICH[GICH_HCR] = GICH_HCR_EN;
> >      GICH[GICH_MISR] = GICH_MISR_EOI;
> > +    gic.lr_mask = 0ULL;
> > +    INIT_LIST_HEAD(&gic.lr_pending);
> >  }
> >  
> >  /* Set up the GIC */
> > @@ -345,16 +350,51 @@ int __init setup_irq(unsigned int irq, struct irqaction *new)
> >      return rc;
> >  }
> >  
> > -void gic_set_guest_irq(unsigned int virtual_irq,
> > +static inline void gic_set_lr(int lr, unsigned int virtual_irq,
> >          unsigned int state, unsigned int priority)
> >  {
> > -    BUG_ON(virtual_irq > nr_lrs);
> > -    GICH[GICH_LR + virtual_irq] = state |
> > +    BUG_ON(lr > nr_lrs);
> > +    GICH[GICH_LR + lr] = state |
> >          GICH_LR_MAINTENANCE_IRQ |
> >          ((priority >> 3) << GICH_LR_PRIORITY_SHIFT) |
> >          ((virtual_irq & GICH_LR_VIRTUAL_MASK) << GICH_LR_VIRTUAL_SHIFT);
> >  }
> >  
> > +void gic_set_guest_irq(unsigned int virtual_irq,
> > +        unsigned int state, unsigned int priority)
> > +{
> > +    int i;
> > +    struct pending_irq *iter, *n;
> > +
> > +    spin_lock(&gic.lock);
> > +
> > +    if ( list_empty(&gic.lr_pending) )
> > +    {
> > +        for (i = 0; i < nr_lrs; i++) {
> 
> One of the bitops.h functions should be usable to avoid this loop. We
> don't seem to have a find-and-set type operation, so you still need the lock.
> 
> Do we have any per-LR state which we keep? If we did then it would be worth
> chaining them into a free list, which you could cheaply enqueue and
> dequeue entries from.
> 
> > +            if (!test_and_set_bit(i, &gic.lr_mask))
> 
> test_and_set_bit is atomic, which is unnecessary since you are also
> protecting this field with a lock; or, if you use the find_first_zero
> idea, you could just use __set_bit.

good idea
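
Something along these lines, I suppose (untested sketch, assuming the
usual Xen find_first_zero_bit()/__set_bit() prototypes, modulo the exact
typing of the uint64_t lr_mask against the bitops argument; __set_bit()
is fine because gic.lock already serialises callers):

    if ( list_empty(&gic.lr_pending) )
    {
        /* Find a free LR slot without walking all of them by hand. */
        i = find_first_zero_bit((unsigned long *)&gic.lr_mask, nr_lrs);
        if ( i < nr_lrs )
        {
            __set_bit(i, &gic.lr_mask);
            gic_set_lr(i, virtual_irq, state, priority);
            spin_unlock(&gic.lock);
            return;
        }
    }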


> > +            {
> > +                gic_set_lr(i, virtual_irq, state, priority);
> > +                spin_unlock(&gic.lock);
> > +                return;
> > +            }
> > +        }
> > +    }
> > +
> > +    n = irq_to_pending(current, virtual_irq);
> > +    list_for_each_entry ( iter, &gic.lr_pending, lr_link )
> > +    {
> > +        if ( iter->priority < priority )
> > +        {
> > +            list_add_tail(&n->lr_link, &iter->lr_link);
> > +            spin_unlock(&gic.lock);
> > +            return;
> > +        }
> > +    }
> > +    list_add(&n->lr_link, &gic.lr_pending);
> > +    spin_unlock(&gic.lock);
> 
> I'm not sure I follow the logic here -- it seems that only one IRQ will
> ever be pending in the LRs at a time, but if we've got 4 LRs wouldn't we
> want to have 4 active at once and only trap every 4 Acks?
> 
> If there's only going to be one active LR then we can just always use LR0
> and do away with the bitmap for the time being.

We have two lists: the first one, inflight_irqs, is kept by the vgic to
track which irqs it has injected into the guest and the guest has not
EOI'd yet.

The second list, gic.lr_pending, holds irqs that, from the vgic's point
of view, have already been injected (they are already on inflight_irqs),
but that the gic has not actually been able to write into an LR yet
because of the hardware limitation (the limited number of LR registers).

Here we are walking the second list, gic.lr_pending, which is ordered by
priority, and inserting this latest guest irq in the right spot, so that
as soon as an LR register becomes available we can actually inject it
into the guest.

The vgic is not actually aware that the gic hasn't managed to inject the
irq yet.
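
The refill then happens on the other side: when the maintenance interrupt
reports that an LR has been EOI'd, the freed slot goes to the head of
gic.lr_pending. Roughly like this (an untested sketch; GICH_LR_PENDING and
the pending_irq field names are assumptions based on the rest of the
series):

    static void gic_refill_lr(int lr)
    {
        struct pending_irq *p;

        spin_lock(&gic.lock);
        if ( list_empty(&gic.lr_pending) )
        {
            /* Nothing queued: the LR is genuinely free again. */
            __clear_bit(lr, &gic.lr_mask);
            spin_unlock(&gic.lock);
            return;
        }
        /* lr_pending is ordered by priority, so take the head. */
        p = list_entry(gic.lr_pending.next, struct pending_irq, lr_link);
        list_del(&p->lr_link);
        gic_set_lr(lr, p->irq, GICH_LR_PENDING, p->priority);
        spin_unlock(&gic.lock);
    }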



> Are interrupts only on lr_pending if they are not active in an LR?

Yes


> Are
> pending irqs which are in an lr kept in a list somewhere else?

They are in inflight_irqs until eoi'd by the guest.


> Also it would be nice to have a single exit and unlock path.

OK
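
i.e. something shaped like this, pulling in the earlier
find_first_zero_bit() idea as well (sketch only, reusing the names from
the patch above):

    void gic_set_guest_irq(unsigned int virtual_irq,
            unsigned int state, unsigned int priority)
    {
        struct pending_irq *iter, *n;
        int i;

        spin_lock(&gic.lock);

        if ( list_empty(&gic.lr_pending) &&
             (i = find_first_zero_bit((unsigned long *)&gic.lr_mask,
                                      nr_lrs)) < nr_lrs )
        {
            __set_bit(i, &gic.lr_mask);
            gic_set_lr(i, virtual_irq, state, priority);
            goto out;
        }

        n = irq_to_pending(current, virtual_irq);
        list_for_each_entry ( iter, &gic.lr_pending, lr_link )
        {
            if ( iter->priority < priority )
            {
                list_add_tail(&n->lr_link, &iter->lr_link);
                goto out;
            }
        }
        list_add(&n->lr_link, &gic.lr_pending);

    out:
        /* single unlock site, as suggested */
        spin_unlock(&gic.lock);
    }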


> > +    return;
> > +}
> > +
> >  void gic_inject_irq_start(void)
> >  {
> >      uint32_t hcr;
> > @@ -435,13 +475,25 @@ static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *r
> >      uint32_t lr;
> >      uint64_t eisr = GICH[GICH_EISR0] | (((uint64_t) GICH[GICH_EISR1]) << 32);
> >  
> > -    for ( i = 0; i < 64; i++ ) {
> > +    for ( i = 0; i < nr_lrs; i++ ) {
> >          if ( eisr & ((uint64_t)1 << i) ) {
> 
> This loop and test could also use a call to whichever bitops.h thing is
> appropriate.
> 
> Maybe it doesn't matter for loops on the order of 64 iterations, but it
> would be a nice cleanup to make.

OK.
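
e.g. something like this (sketch only, assuming find_next_bit() can be
pointed at the uint64_t with a cast, which works word-by-word on
little-endian arm):

    i = 0;
    /* Visit only the bits actually set in EISR rather than testing
     * every LR index in turn. */
    while ( (i = find_next_bit((const unsigned long *)&eisr, nr_lrs, i))
            < nr_lrs )
    {
        /* handle the EOI'd LR number i, as in the loop body above */
        i++;
    }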
