
Re: [Xen-devel] [PATCH] x86/HPET: mask interrupt while changing affinity


  • To: Jan Beulich <JBeulich@xxxxxxxx>, xen-devel <xen-devel@xxxxxxxxxxxxx>
  • From: Keir Fraser <keir@xxxxxxx>
  • Date: Mon, 18 Mar 2013 12:09:24 +0000
  • Delivery-date: Mon, 18 Mar 2013 12:09:44 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xen.org>
  • Thread-index: Ac4j0W3xZQIKUwo/XkaulVij6P2/dA==
  • Thread-topic: [PATCH] x86/HPET: mask interrupt while changing affinity

On 18/03/2013 11:12, "Jan Beulich" <JBeulich@xxxxxxxx> wrote:

> While I have been unable to reproduce the "No irq handler for vector ..."
> messages observed on other systems, the diagnostics added by 5dc3fd2
> ('x86: extend diagnostics for "No irq handler for vector" messages')
> point at the lack of masking - at least I can't see what else in the
> HPET MSI code could trigger these warnings.
> 
> While at it, also adjust the message printed by the aforementioned
> commit so it no longer pointlessly inserts padding spaces - we don't
> need aligned tabular output here.
> 
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>

Acked-by: Keir Fraser <keir@xxxxxxx>

> --- a/xen/arch/x86/hpet.c
> +++ b/xen/arch/x86/hpet.c
> @@ -466,7 +466,9 @@ static void set_channel_irq_affinity(con
>  
>      ASSERT(!local_irq_is_enabled());
>      spin_lock(&desc->lock);
> +    hpet_msi_mask(desc);
>      hpet_msi_set_affinity(desc, cpumask_of(ch->cpu));
> +    hpet_msi_unmask(desc);
>      spin_unlock(&desc->lock);
>  }
>  
> --- a/xen/arch/x86/irq.c
> +++ b/xen/arch/x86/irq.c
> @@ -826,7 +826,7 @@ void do_IRQ(struct cpu_user_regs *regs)
>                  if ( ~irq < nr_irqs && irq_desc_initialized(desc) )
>                  {
>                      spin_lock(&desc->lock);
> -                    printk("IRQ%d a=%04lx[%04lx,%04lx] v=%02x[%02x] t=%-15s s=%08x\n",
> +                    printk("IRQ%d a=%04lx[%04lx,%04lx] v=%02x[%02x] t=%s s=%08x\n",
>                             ~irq, *cpumask_bits(desc->affinity),
>                             *cpumask_bits(desc->arch.cpu_mask),
>                             *cpumask_bits(desc->arch.old_cpu_mask),
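
For reference, the hpet.c hunk applies the usual mask / reprogram / unmask
ordering for an edge-triggered MSI: while the address/data pair is being
rewritten, an unmasked source could otherwise deliver a message with a stale
vector/destination combination, which is what the "No irq handler for vector"
diagnostic then reports. A minimal standalone sketch of that ordering
(hypothetical names and simplified state, not the actual Xen hpet_msi_*
implementation):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct msi_route {
    bool     masked;    /* mask bit of the (emulated) MSI source          */
    unsigned dest_cpu;  /* destination encoded in the MSI address         */
    uint8_t  vector;    /* vector encoded in the MSI data                 */
};

static void msi_mask(struct msi_route *r)   { r->masked = true;  }
static void msi_unmask(struct msi_route *r) { r->masked = false; }

/* Rewriting the route is not atomic: address and data are separate writes,
 * so an unmasked source could emit a message pairing the new destination
 * with the old vector (or vice versa). */
static void msi_set_affinity(struct msi_route *r, unsigned cpu, uint8_t vec)
{
    r->dest_cpu = cpu;
    r->vector   = vec;
}

int main(void)
{
    struct msi_route hpet = { .masked = false, .dest_cpu = 0, .vector = 0x30 };

    /* The ordering the patch enforces: mask, reprogram, unmask. */
    msi_mask(&hpet);
    msi_set_affinity(&hpet, /* cpu */ 2, /* vec */ 0x40);
    msi_unmask(&hpet);

    printf("route: cpu=%u vector=%#x masked=%d\n",
           hpet.dest_cpu, (unsigned)hpet.vector, hpet.masked);
    return 0;
}

In the patch itself the same sequence runs under desc->lock with interrupts
disabled: hpet_msi_mask() before hpet_msi_set_affinity(), hpet_msi_unmask()
afterwards.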

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel