Re: [Xen-devel] [PATCH] x86: extend diagnostics for "No irq handler for vector" messages
>>> On 13.03.13 at 13:49, "Jan Beulich" <JBeulich@xxxxxxxx> wrote:
> By storing the inverted IRQ number in vector_irq[], we may be able to
> spot which IRQ a vector was used for most recently, thus hopefully
> permitting to understand why these messages trigger on certain systems.

So this made it quite obvious that it's the HPET FSB interrupts that
misbehave in some way. I'll be looking into this, but can't yet estimate
exactly when.

I guess we can keep the patch in nevertheless, perhaps with ...

> @@ -819,8 +819,22 @@ void do_IRQ(struct cpu_user_regs *regs)
>          if ( ! ( vector >= FIRST_LEGACY_VECTOR &&
>                   vector <= LAST_LEGACY_VECTOR &&
>                   bogus_8259A_irq(vector - FIRST_LEGACY_VECTOR) ) )
> +        {
>              printk("CPU%u: No irq handler for vector %02x (IRQ %d%s)\n",
>                     smp_processor_id(), vector, irq, kind);
> +            desc = irq_to_desc(~irq);
> +            if ( ~irq < nr_irqs && irq_desc_initialized(desc) )
> +            {
> +                spin_lock(&desc->lock);
> +                printk("IRQ%d a=%04lx[%04lx,%04lx] v=%02x[%02x] t=%-15s s=%08x\n",

... the %-15s replaced by simply %s (as the output here isn't in tabular
form, yet I copied the stuff from dump_irqs() without paying attention
to this aspect).

Jan

> +                       ~irq, *cpumask_bits(desc->affinity),
> +                       *cpumask_bits(desc->arch.cpu_mask),
> +                       *cpumask_bits(desc->arch.old_cpu_mask),
> +                       desc->arch.vector, desc->arch.old_vector,
> +                       desc->handler->typename, desc->status);
> +                spin_unlock(&desc->lock);
> +            }
> +        }
>              TRACE_1D(TRC_HW_IRQ_UNMAPPED_VECTOR, vector);
>          }
>          goto out_no_unlock;

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel