[Xen-devel] [PATCH] x86/HPET: mask interrupt while changing affinity
While being unable to reproduce the "No irq handler for vector ..."
messages observed on other systems, the change done by 5dc3fd2 ('x86:
extend diagnostics for "No irq handler for vector" messages') appears
to point at the lack of masking - at least I can't see what else might
be wrong with the HPET MSI code that could trigger these warnings.

While at it, also adjust the message printed by the aforementioned
commit to not pointlessly insert spaces - we don't need aligned
tabular output here.

Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>

--- a/xen/arch/x86/hpet.c
+++ b/xen/arch/x86/hpet.c
@@ -466,7 +466,9 @@ static void set_channel_irq_affinity(con
     ASSERT(!local_irq_is_enabled());
     spin_lock(&desc->lock);
+    hpet_msi_mask(desc);
     hpet_msi_set_affinity(desc, cpumask_of(ch->cpu));
+    hpet_msi_unmask(desc);
     spin_unlock(&desc->lock);
 }

--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -826,7 +826,7 @@ void do_IRQ(struct cpu_user_regs *regs)
         if ( ~irq < nr_irqs && irq_desc_initialized(desc) )
         {
             spin_lock(&desc->lock);
-            printk("IRQ%d a=%04lx[%04lx,%04lx] v=%02x[%02x] t=%-15s s=%08x\n",
+            printk("IRQ%d a=%04lx[%04lx,%04lx] v=%02x[%02x] t=%s s=%08x\n",
                    ~irq, *cpumask_bits(desc->affinity),
                    *cpumask_bits(desc->arch.cpu_mask),
                    *cpumask_bits(desc->arch.old_cpu_mask),

Attachment: x86-HPET-affinity-masked.patch
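[Editor's note] For context on why the masking matters: retargeting an MSI rewrites both the address (destination CPU) and data (vector) registers, and an interrupt raised between those two writes can be delivered to a CPU/vector combination that nothing is registered for - which is what the "No irq handler for vector ..." diagnostic reports. The sketch below illustrates the mask/reprogram/unmask ordering in isolation; it is not Xen code, and every name in it (msi_target, retarget_msi, the struct fields) is a hypothetical stand-in for the hardware accesses the real hpet_msi_mask()/hpet_msi_set_affinity()/hpet_msi_unmask() routines perform.

    /* Minimal, self-contained sketch (not Xen code) of the
     * mask-while-retargeting pattern; all names are hypothetical. */
    #include <stdbool.h>
    #include <stdio.h>

    struct msi_target {
        bool masked;            /* delivery gated while set */
        unsigned int dest_cpu;  /* destination encoded in the MSI address */
        unsigned int vector;    /* vector encoded in the MSI data */
    };

    static void retarget_msi(struct msi_target *t, unsigned int cpu,
                             unsigned int vec)
    {
        /* Without the mask, an interrupt raised between the two register
         * updates below is delivered with a mixed old/new CPU/vector pair
         * that no handler is registered for - the situation the
         * "No irq handler for vector ..." message reports. */
        t->masked = true;       /* hpet_msi_mask() in the real code */
        t->dest_cpu = cpu;      /* MSI address register write */
        t->vector = vec;        /* MSI data register write */
        t->masked = false;      /* hpet_msi_unmask() in the real code */
    }

    int main(void)
    {
        struct msi_target hpet_ch = { .masked = false, .dest_cpu = 0,
                                      .vector = 0xf0 };

        retarget_msi(&hpet_ch, 3, 0xf0);
        printf("HPET channel now targets CPU%u, vector 0x%x\n",
               hpet_ch.dest_cpu, hpet_ch.vector);
        return 0;
    }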