[Xen-devel] Re: [PATCH v2 1/2] x86: skip migrating IRQF_PER_CPU irq in fixup_irqs
On Fri, May 06, 2011 at 02:43:36PM +0800, Tian, Kevin wrote:
> x86: skip migrating IRQF_PER_CPU irq in fixup_irqs
>
> IRQF_PER_CPU marks a irq binding to a specific cpu, and can never be
> moved away from that cpu. So it shouldn't be migrated when fixup irqs
> to offline a cpu. Xen pvops guest is one source using IRQF_PER_CPU
  ^- are called
> on a set of virtual interrupts. Previously no error is observed
                                                      ^^- was

Which ones? Can you be more specific here about which type of virtual
interrupts? spinlock? timer?

> because Xen event chip silently fails the set_affinity ops, and
> logically IRQF_PER_CPU should be recognized here.

OK, so what would happen if the set_affinity op were implemented?

>
> Signed-off-by: Fengzhe Zhang <fengzhe.zhang@xxxxxxxxx>
> Signed-off-by: Kevin Tian <kevin.tian@xxxxxxxxx>
> CC: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
> CC: Ingo Molnar <mingo@xxxxxxxxxx>
> CC: H. Peter Anvin <hpa@xxxxxxxxx>
> CC: Ian Campbell <Ian.Campbell@xxxxxxxxxx>
> CC: Jan Beulich <JBeulich@xxxxxxxxxx>
>
> --- linux-2.6.39-rc6.orig/arch/x86/kernel/irq.c	2011-05-04 10:59:13.000000000 +0800
> +++ linux-2.6.39-rc6/arch/x86/kernel/irq.c	2011-05-06 09:20:25.563963000 +0800
> @@ -249,7 +250,7 @@ void fixup_irqs(void)
>
>  		data = irq_desc_get_irq_data(desc);
>  		affinity = data->affinity;
> -		if (!irq_has_action(irq) ||
> +		if (!irq_has_action(irq) || irqd_is_per_cpu(data) ||
>  		    cpumask_subset(affinity, cpu_online_mask)) {
>  			raw_spin_unlock(&desc->lock);
>  			continue;
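
For context, here is a minimal sketch of the kind of registration that
produces such a per-cpu irq. This is illustrative only, not code from the
patch or from Xen: the irq number, handler, and device name below are made
up, while request_irq(), IRQF_PERCPU (the mainline spelling of the flag
discussed above), and IRQ_HANDLED are real kernel interfaces. Registering
an action with IRQF_PERCPU makes the irq core set IRQD_PER_CPU on the irq's
irq_data, which is exactly what the irqd_is_per_cpu() test in the hunk
above keys on.

/*
 * Illustrative sketch only -- not from the patch or from Xen code.
 * The irq number, handler, and name are hypothetical.
 */
#include <linux/interrupt.h>
#include <linux/module.h>

static irqreturn_t demo_percpu_handler(int irq, void *dev_id)
{
	/* Runs only on the cpu this irq is bound to. */
	return IRQ_HANDLED;
}

static int __init demo_init(void)
{
	int irq = 42;	/* hypothetical irq number */

	/*
	 * IRQF_PERCPU causes the irq core to set IRQD_PER_CPU on this
	 * irq's irq_data, so the irqd_is_per_cpu() check added above
	 * is true and fixup_irqs() leaves the irq alone when a cpu
	 * goes offline.
	 */
	return request_irq(irq, demo_percpu_handler, IRQF_PERCPU,
			   "demo-percpu", NULL);
}
module_init(demo_init);
MODULE_LICENSE("GPL");

With such a binding in place, the pre-patch fixup_irqs() would still try to
move the irq off a dying cpu and rely on the Xen event chip silently
refusing the set_affinity op; the added irqd_is_per_cpu() check makes the
skip explicit instead.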