[Xen-devel] Re: Xen spinlock questions
Jan Beulich wrote:
> Jeremy, considering utilizing your pv-ops spinlock implementation for our
> kernels, I'd appreciate your opinion on the following thoughts:
>
> 1) While the goal of the per-CPU kicker IRQ appears to be to avoid all
> CPUs waiting for a particular lock getting kicked simultaneously, I think
> this doesn't have the desired effect. This is because Xen doesn't track
> which event channel you poll for (through SCHEDOP_poll); rather, it kicks
> all CPUs polling for any event channel.

There's no problem with kicking all CPUs waiting for a given lock, but the intent was to avoid kicking CPUs waiting for some other lock. I hadn't looked at the poll implementation that closely. I guess using the per-CPU interrupt gives Xen some room to live up to the expectations we have for it ;)

> 2) While on native it may be tolerable (though perhaps questionable) not
> to re-enable interrupts in __raw_spin_lock_flags(), not doing so at least
> on the slow path here seems suspicious.

I wasn't sure about that. Is it OK to enable interrupts in the middle of a spinlock? Can it be done unconditionally?

> 3) Introducing yet another per-CPU IRQ for this purpose further tightens
> scalability. Using a single IRQF_PER_CPU IRQ should be sufficient here, as
> long as it gets properly multiplexed onto individual event channels (of
> which we have far more than IRQs). I have a patch queued for the
> traditional tree that does just that conversion for the reschedule and
> call-function IPIs, which I had long planned to submit (but so far haven't
> been able to, due to the lack of testing done on the migration aspects of
> it); once that's successful, I was planning to try something similar for
> the timer IRQ.

There are two lines of work I'm hoping to push to mitigate this. One is the unification of 32- and 64-bit interrupt handling, so that both have an underlying notion of a vector, which is what we map event channels to.
Since vectors can be mapped to an (irq, cpu) tuple, it would allow multiple per-CPU vectors/event channels to be mapped to a single IRQ, and do so generically for all event channel types. That would mean we'd end up allocating one set of interrupts for time, function calls, spinlocks, etc., rather than per-CPU.

The other is eliminating NR_IRQ and making IRQ allocation completely dynamic.

I am attaching that (2.6.26-based) patch just for reference. From a quick look, you're thinking along similar lines.

    J

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel