
Re: [Xen-devel] [PATCH v4 03/10] xen/arm: support HW interrupts, do not request maintenance_interrupts



On Wed, 19 Mar 2014, Julien Grall wrote:
> Hi Stefano,
> 
> On 03/19/2014 12:31 PM, Stefano Stabellini wrote:
> 
> [..]
> 
> > @@ -625,16 +628,19 @@ int __init setup_dt_irq(const struct dt_irq *irq, struct irqaction *new)
> >  static inline void gic_set_lr(int lr, struct pending_irq *p,
> >          unsigned int state)
> >  {
> > -    int maintenance_int = GICH_LR_MAINTENANCE_IRQ;
> > +    uint32_t lr_reg;
> >  
> >      BUG_ON(lr >= nr_lrs);
> >      BUG_ON(lr < 0);
> >      BUG_ON(state & ~(GICH_LR_STATE_MASK<<GICH_LR_STATE_SHIFT));
> >  
> > -    GICH[GICH_LR + lr] = state |
> > -        maintenance_int |
> > -        ((p->priority >> 3) << GICH_LR_PRIORITY_SHIFT) |
> > +    lr_reg = state | ((p->priority >> 3) << GICH_LR_PRIORITY_SHIFT) |
> >          ((p->irq & GICH_LR_VIRTUAL_MASK) << GICH_LR_VIRTUAL_SHIFT);
> > +    if ( p->desc != NULL )
> > +        lr_reg |= GICH_LR_HW |
> > +            ((p->desc->irq & GICH_LR_PHYSICAL_MASK) << GICH_LR_PHYSICAL_SHIFT);
> > +
> > +    GICH[GICH_LR + lr] = lr_reg;
> >  
> >      set_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
> >      clear_bit(GIC_IRQ_GUEST_PENDING, &p->status);
> > @@ -669,7 +675,7 @@ void gic_remove_from_queues(struct vcpu *v, unsigned int virtual_irq)
> >      spin_unlock_irqrestore(&gic.lock, flags);
> >  }
> >  
> > -void gic_set_guest_irq(struct vcpu *v, unsigned int virtual_irq,
> > +void gic_set_guest_irq(struct vcpu *v, unsigned int irq,
> 
> Any reason to rename virtual_irq into irq?

Just that with this patch we also use this function to inject hw irqs.
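Just to spell out what that means for the LR programming above (this is
commentary on the same code, not something extra in the patch): whether an
LR describes a hardware interrupt is keyed entirely off p->desc, which is
only non-NULL when the physical IRQ has been routed to the guest. Something
along these lines:

    /* Sketch, not new code: same logic as the hunk above, annotated. */
    if ( p->desc != NULL )
    {
        /* Physical IRQ routed to the guest: set the HW bit so that the
         * GIC deactivates the physical interrupt when the guest EOIs the
         * virtual one.  No maintenance interrupt is needed for this LR. */
        lr_reg |= GICH_LR_HW |
            ((p->desc->irq & GICH_LR_PHYSICAL_MASK) << GICH_LR_PHYSICAL_SHIFT);
    }
    /* else: purely virtual interrupt (e.g. the event channel IRQ); the LR
     * only carries state, priority and the virtual IRQ number. */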


> [..]
> 
> > +static void gic_clear_lrs(struct vcpu *v)
> > +{
> > +    struct pending_irq *p;
> > +    int i = 0, irq;
> > +    uint32_t lr;
> > +    bool_t inflight;
> > +
> > +    ASSERT(!local_irq_is_enabled());
> > +
> > +    while ((i = find_next_bit((const long unsigned int *) &this_cpu(lr_mask),
> > +                              nr_lrs, i)) < nr_lrs) {
> > +        lr = GICH[GICH_LR + i];
> > +        if ( !(lr & (GICH_LR_PENDING|GICH_LR_ACTIVE)) )
> > +        {
> > +            inflight = 0;
> > +            GICH[GICH_LR + i] = 0;
> > +            clear_bit(i, &this_cpu(lr_mask));
> > +
> > +            irq = (lr >> GICH_LR_VIRTUAL_SHIFT) & GICH_LR_VIRTUAL_MASK;
> > +            spin_lock(&gic.lock);
> 
> Not completely related to this patch ... taking gic.lock seems a bit too
> strong here. The critical section only updates data for the current
> domain. It seems a bit stupid to block the other pcpus from handling
> their interrupts at the same time.
> 
> Maybe introducing a dist lock would be a better solution?

It gets removed by a later patch: "don't protect GICH and lr_queue
accesses with gic.lock".
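To illustrate the general idea (a sketch only, not the later patch itself:
it reuses the per-vcpu v->arch.vgic.lock that already exists in vgic.c and
the existing irq_to_pending()/inflight bookkeeping, and it simplifies the
retire logic):

    /* Retiring an LR only touches state belonging to vcpu v, so v's own
     * vgic lock is enough; other pcpus are not serialized on the global
     * gic.lock while they handle their own interrupts. */
    spin_lock(&v->arch.vgic.lock);
    p = irq_to_pending(v, irq);
    clear_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
    if ( !test_bit(GIC_IRQ_GUEST_PENDING, &p->status) )
        list_del_init(&p->inflight);    /* nothing left to deliver */
    spin_unlock(&v->arch.vgic.lock);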


> [..]
> 
> >  void gic_dump_info(struct vcpu *v)
> > diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> > index aab490c..566f0ff 100644
> > --- a/xen/arch/arm/vgic.c
> > +++ b/xen/arch/arm/vgic.c
> > @@ -701,8 +701,7 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq)
> >          if ( (irq != current->domain->arch.evtchn_irq) ||
> >               (!test_bit(GIC_IRQ_GUEST_VISIBLE, &n->status)) )
> >              set_bit(GIC_IRQ_GUEST_PENDING, &n->status);
> > -        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
> > -        return;
> > +        goto out;
> 
> We don't want to kick the other VCPU every time. I think it's enough to
> do it only when the interrupt state is actually updated.

Without maintenance interrupts, the other vcpu might never come back into
the hypervisor on its own. We need to send an SGI to it to make sure it
gets interrupted. Once it is interrupted, it will run gic_clear_lrs, which
clears the old LR and injects the new interrupt.
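Roughly, the tail of vgic_vcpu_inject_irq then needs to look something like
this (a sketch: vcpu_unblock() is the wake-up the function already uses,
and smp_send_event_check_mask() is assumed here as the primitive that
raises the event-check SGI; the exact call is not part of this patch):

    out:
        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
        /* The target vcpu may be running in guest mode on another pcpu;
         * with no maintenance interrupt nothing would bring it back into
         * Xen on its own, so wake or kick it.  On re-entry it runs
         * gic_clear_lrs(), which retires the old LR and lets the new
         * interrupt be injected. */
        vcpu_unblock(v);
        if ( v != current && v->is_running )
            smp_send_event_check_mask(cpumask_of(v->processor));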

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

