Re: [Xen-devel] Xen-devel Digest, Vol 109, Issue 583
On Wed, 26 Mar 2014, Vijay Kilari wrote:
> > Date: Mon, 24 Mar 2014 18:49:29 +0000
> > From: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
> > To: <xen-devel@xxxxxxxxxxxxxxxxxxx>
> > Cc: julien.grall@xxxxxxxxxx, Ian.Campbell@xxxxxxxxxx, Stefano
> > Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
> > Subject: [Xen-devel] [PATCH v5 04/10] xen/arm: support HW interrupts,
> > do not request maintenance_interrupts
> > Message-ID:
> > <1395686975-12649-4-git-send-email-stefano.stabellini@xxxxxxxxxxxxx>
> > Content-Type: text/plain
> >
> > If the irq to be injected is a hardware irq (p->desc != NULL), set
> > GICH_LR_HW. Do not set GICH_LR_MAINTENANCE_IRQ.
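(As context, a minimal sketch of how an LR value could be composed under
this scheme; make_lr is a hypothetical helper, and the priority encoding
and GICH_LR_PHYSICAL_SHIFT are assumptions rather than quotes from the
patch:)

    /* Sketch only: compose a list-register value for injection. */
    static uint32_t make_lr(struct pending_irq *p, int irq,
                            unsigned int priority, uint32_t state)
    {
        uint32_t lr = state |
                      ((priority >> 3) << GICH_LR_PRIORITY_SHIFT) |
                      ((uint32_t)irq << GICH_LR_VIRTUAL_SHIFT);

        /* Hardware interrupt: set the HW bit and the physical ID, and
         * do not set GICH_LR_MAINTENANCE_IRQ; the guest's EOI of the
         * virtual interrupt then deactivates the physical one directly. */
        if ( p->desc != NULL )
            lr |= GICH_LR_HW |
                  ((uint32_t)p->desc->irq << GICH_LR_PHYSICAL_SHIFT);

        return lr;
    }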
> >
> > Remove the code to EOI a physical interrupt on behalf of the guest:
> > with GICH_LR_HW set, the guest's EOI of the virtual interrupt also
> > deactivates the corresponding physical interrupt, so the code has
> > become unnecessary.
> >
> > Introduce a new function, gic_clear_lrs, that goes over the GICH_LR
> > registers, clears the invalid ones and frees the corresponding
> > interrupts from the inflight queue if appropriate. Add the interrupt
> > to lr_pending if the GIC_IRQ_GUEST_PENDING bit is still set.
> >
> > Call gic_clear_lrs on entry to the hypervisor to make sure that Xen's
> > calculation of the highest priority interrupt currently inflight is
> > accurate and not based on stale data.
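(The hunk placing this call is not quoted in this digest; a sketch under
the assumption of a single helper invoked on the trap-entry path:)

    /* Sketch (function name assumed): refresh Xen's view of the list
     * registers on every trap into the hypervisor, before any
     * computation of the highest-priority inflight interrupt. */
    static void enter_hypervisor_head(void)
    {
        gic_clear_lrs(current);
    }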
> >
> > In vgic_vcpu_inject_irq, if the target is a vcpu running on another
> > pcpu, we already send an SGI to the other pcpu so that it picks up the
> > new IRQ to inject. Now also send an SGI even if the IRQ is already
> > inflight, so that the other pcpu can clear the LR corresponding to the
> > previous injection as well as inject the new interrupt.
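(The vgic_vcpu_inject_irq hunk is likewise not quoted here; a sketch of
the described behaviour, with the surrounding context assumed:)

    /* Sketch of the tail of vgic_vcpu_inject_irq: kick the target pcpu
     * even when the IRQ is already inflight, so it can clear the LR
     * left over from the previous injection as well as inject the new
     * interrupt. */
    if ( running && v != current )
        smp_send_event_check_mask(cpumask_of(v->processor));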
> >
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
> >
> > +static void gic_clear_one_lr(struct vcpu *v, int i)
> > +{
> > +    struct pending_irq *p;
> > +    uint32_t lr;
> > +    int irq;
> > +    bool_t inflight;
> > +
> > +    ASSERT(!local_irq_is_enabled());
> > +    ASSERT(spin_is_locked(&v->arch.vgic.lock));
> > +
> > +    lr = GICH[GICH_LR + i];
> > +    if ( !(lr & (GICH_LR_PENDING|GICH_LR_ACTIVE)) )
> > +    {
> > +        inflight = 0;
> > +        GICH[GICH_LR + i] = 0;
> > +        clear_bit(i, &this_cpu(lr_mask));
> > +
> > +        irq = (lr >> GICH_LR_VIRTUAL_SHIFT) & GICH_LR_VIRTUAL_MASK;
> > +        spin_lock(&gic.lock);
> > +        p = irq_to_pending(v, irq);
> > +        if ( p->desc != NULL )
> > +            p->desc->status &= ~IRQ_INPROGRESS;
> > +        clear_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
> > +        if ( test_bit(GIC_IRQ_GUEST_PENDING, &p->status) &&
> > +             test_bit(GIC_IRQ_GUEST_ENABLED, &p->status) )
> > +        {
> > +            inflight = 1;
> > +            gic_set_guest_irq(v, irq, GICH_LR_PENDING, p->priority);
> > +        }
> > +        spin_unlock(&gic.lock);
> > +        if ( !inflight )
> > +        {
> > +            spin_lock(&v->arch.vgic.lock);
>
> In this condition, are you not trying to take vgic.lock a second time?
> It is already taken in the caller of this function, gic_clear_lrs().
You are right. This bug was introduced with the refactoring in v5.
I'll fix and resend.
> > +            list_del_init(&p->inflight);
> > +            spin_unlock(&v->arch.vgic.lock);
> > +        }
> > +    }
> > +}
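To spell out the fix: gic_clear_one_lr already runs with
v->arch.vgic.lock held (see the ASSERT at the top of the function), so
the inner lock/unlock pair can simply be dropped. A minimal sketch of
the corrected tail:

    /* Sketch of the fix: the caller, gic_clear_lrs, already holds
     * v->arch.vgic.lock, so no nested locking is needed here. */
    if ( !inflight )
        list_del_init(&p->inflight);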
> > +
> > +void gic_clear_lrs(struct vcpu *v)
> > +{
> > +    int i = 0;
> > +    unsigned long flags;
> > +
> > +    spin_lock_irqsave(&v->arch.vgic.lock, flags);
> > +
> > +    while ((i = find_next_bit((const unsigned long *) &this_cpu(lr_mask),
> > +                              nr_lrs, i)) < nr_lrs) {
> > +        gic_clear_one_lr(v, i);
> > +        i++;
> > +    }
> > +
> > +    spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
> > +}
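Note that the find_next_bit loop only visits the list registers Xen has
actually populated: lr_mask is a per-cpu bitmap with one bit per in-use
LR, so the scan skips empty registers instead of reading all nr_lrs of
them.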
>
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel