
Re: [Xen-devel] [PATCH for-4.5] xen/arm: Fix virtual timer on ARMv8 Model



On Thu, 27 Nov 2014, Ian Campbell wrote:
> On Tue, 2014-11-25 at 17:44 +0000, Julien Grall wrote:
> > ARMv8 model may not disable correctly the timer interrupt when Xen
> 
> "correct disable"
> 
> > context switch to an idle vCPU. Therefore Xen may receive a spurious
> 
> "context switches" and s/spurious/unexpected/ (since spurious has a
> specific meaning in the h/w which does not match what is happening here)
> 
> > timer interrupt. As the idle domain doesn't have vGIC, Xen will crash
> > when trying to inject the interrupt with the following stack trace.
> > 
> > (XEN)    [<0000000000228388>] _spin_lock_irqsave+0x28/0x94 (PC)
> > (XEN)    [<0000000000228380>] _spin_lock_irqsave+0x20/0x94 (LR)
> > (XEN)    [<0000000000250510>] vgic_vcpu_inject_irq+0x40/0x1b0
> > (XEN)    [<000000000024bcd0>] vtimer_interrupt+0x4c/0x54
> > (XEN)    [<0000000000247010>] do_IRQ+0x1a4/0x220
> > (XEN)    [<0000000000244864>] gic_interrupt+0x50/0xec
> > (XEN)    [<000000000024fbac>] do_trap_irq+0x20/0x2c
> > (XEN)    [<0000000000255240>] hyp_irq+0x5c/0x60
> > (XEN)    [<0000000000241084>] context_switch+0xb8/0xc4
> > (XEN)    [<000000000022482c>] schedule+0x684/0x6d0
> > (XEN)    [<000000000022785c>] __do_softirq+0xcc/0xe8
> > (XEN)    [<00000000002278d4>] do_softirq+0x14/0x1c
> > (XEN)    [<0000000000240fac>] idle_loop+0x134/0x154
> > (XEN)    [<000000000024c160>] start_secondary+0x14c/0x15c
> > (XEN)    [<0000000000000001>] 0000000000000001
> > 
> > While we may receive a spurious virtual timer interrupt, it can safely
> > be ignored for the time being. A proper fix needs to be found for Xen 4.6.
> > 
> > Signed-off-by: Julien Grall <julien.grall@xxxxxxxxxx>
> 
> Acked-by: Ian Campbell <ian.campbell@xxxxxxxxxx>
> 
> Although I wonder if we should log, perhaps rate limited or only once.
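
A log-once variant of the early return could look like the sketch below
(the one-shot flag and the message text are illustration only, not part
of the posted patch):

    if ( is_idle_vcpu(current) )
    {
        /* Hypothetical one-shot flag; not strictly SMP-exact, but good
         * enough to avoid flooding the console on every stray interrupt. */
        static bool_t logged = 0;

        if ( !logged )
        {
            logged = 1;
            printk(XENLOG_WARNING
                   "Unexpected virtual timer interrupt on idle vCPU, ignoring\n");
        }
        return;
    }
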
> 
> Also, I've some grammar nits (above and below) which I can fix on commit
> if there is no resend...
> 
> > 
> > ---
> > 
> > This patch is a bug fix candidate for Xen 4.5. Without it, any ARMv8
> > model may randomly crash when running Xen.
> 
> CCing Konrad.
> 
> > This patch doesn't inject the virtual timer interrupt if the current VCPU
> > is the idle one. Entering this function with the idle VCPU is already
> > a bug in itself. For now, I think this patch is the safest way to resolve
> > the problem.
> > 
> > Meanwhile, I'm investigating with ARM to see whether the bug comes from
> > Xen or the model.

It is worth noting that there are no bad side effects of this change:
the vtimer_interrupt is always supposed to be received on non-idle
domains. As Julien wrote, the fact that we are receiving a
vtimer_interrupt in the idle_domain is a bug, one that probably comes
from the ARM model not emulating hardware correctly.
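
For reference, the masking that is supposed to prevent this happens when
Xen switches away from a vCPU, along the lines of the sketch below (it
uses the same accessors as the hunk; the helper name is made up, not the
exact Xen save path):

    /* On context switch away from v: snapshot the virtual timer control
     * register, then set the mask bit so the timer cannot raise an
     * interrupt while v is descheduled. The model apparently lets one
     * through anyway. */
    static void virt_timer_mask_on_save(struct vcpu *v)
    {
        v->arch.virt_timer.ctl = READ_SYSREG32(CNTV_CTL_EL0);
        WRITE_SYSREG32(v->arch.virt_timer.ctl | CNTx_CTL_MASK, CNTV_CTL_EL0);
    }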


> >  xen/arch/arm/time.c | 8 ++++++++
> >  1 file changed, 8 insertions(+)
> > 
> > diff --git a/xen/arch/arm/time.c b/xen/arch/arm/time.c
> > index a6436f1..83c74cb 100644
> > --- a/xen/arch/arm/time.c
> > +++ b/xen/arch/arm/time.c
> > @@ -169,6 +169,14 @@ static void timer_interrupt(int irq, void *dev_id, struct cpu_user_regs *regs)
> >  
> >  static void vtimer_interrupt(int irq, void *dev_id, struct cpu_user_regs *regs)
> >  {
> > +    /*
> > +     * ARMv8 model may not disable correctly the timer interrupt when
> 
> "correctly disable"
> 
> > +     * Xen context switch to an idle vCPU. Therefore Xen may receive
> 
> "context switches" and "may receive an unexpected timer interrupt"
> 
> > +     * timer interrupt.
> > +     */
> > +    if ( is_idle_vcpu(current) )
> > +        return;
> > +
> >      current->arch.virt_timer.ctl = READ_SYSREG32(CNTV_CTL_EL0);
> >      WRITE_SYSREG32(current->arch.virt_timer.ctl | CNTx_CTL_MASK, CNTV_CTL_EL0);
> >      vgic_vcpu_inject_irq(current, current->arch.virt_timer.irq);
> 
> 
