[Xen-devel] Re: Question about x86/mm/gup.c's use of disabled interrupts
Avi Kivity wrote:
>> And the hypercall could result in no Xen-level IPIs at all, so it
>> could be very quick by comparison to an IPI-based Linux
>> implementation, in which case the flag polling would be particularly
>> harsh.
>
> Maybe we could bring these optimizations into Linux as well. The only
> thing Xen knows that Linux doesn't is whether a vcpu is scheduled; all
> other information is shared.

I don't think there's a guarantee that just because a vcpu isn't
running now, it won't need a tlb flush. If a pcpu runs vcpu 1 -> idle
-> vcpu 1, then there's no need for it to do a tlb flush, but the
hypervisor can force a flush when it reschedules vcpu 1 (if the tlb
hasn't already been flushed by some other means). (I'm not sure to
what extent Xen implements this now, but I wouldn't want to
over-constrain it.)

Also, the straightforward implementation of "poll until all target
cpus' flags are clear" may never make progress, so you'd have to "scan
flags, remove busy cpus from set, repeat until all cpus done" (sketched
below). All annoying, because this race is pretty unlikely, and it
seems a shame to slow down all tlb flushes to deal with it. Some kind
of global "doing gup_fast" counter would let flush_tlb_others bypass
the check, at the cost of putting a couple of atomic ops around the
outside of gup_fast (also sketched below).

> The nice thing about local_irq_disable() is that it scales so well.

Right. But it effectively puts the burden on the tlb-flusher to check
the state (implicitly, by trying to send an interrupt). Putting an
explicit poll in gets the same effect, but it's pure overhead just to
deal with the gup race.

I'll put a patch together and see how it looks.

    J
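A minimal sketch of the "scan flags, remove busy cpus from set,
repeat" loop described above, in kernel-style C. The per-cpu flag test
cpu_in_gup_fast() is hypothetical, invented purely for illustration;
the cpumask helpers are real kernel APIs:

#include <linux/cpumask.h>

/* Hypothetical per-cpu "inside gup_fast" flag test; not a real API. */
extern bool cpu_in_gup_fast(unsigned int cpu);

static void wait_for_gup_fast(struct cpumask *mask)
{
	unsigned int cpu;

	/*
	 * Waiting for one instant when *all* flags are clear may never
	 * terminate if cpus keep re-entering gup_fast.  Instead, drop
	 * each cpu from the wait set as soon as it is observed outside
	 * the critical section.
	 */
	while (!cpumask_empty(mask)) {
		for_each_cpu(cpu, mask)
			if (!cpu_in_gup_fast(cpu))
				cpumask_clear_cpu(cpu, mask);
		cpu_relax();
	}
}

This guarantees forward progress: each cpu only needs to be seen
outside gup_fast once, even if it immediately re-enters.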
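The global "doing gup_fast" counter could look something like the
following. Again a sketch only: the function names are invented here,
and it assumes flush_tlb_others would test the counter before falling
back to any flag polling:

#include <linux/atomic.h>

static atomic_t gup_fast_count = ATOMIC_INIT(0);

static inline void gup_fast_begin(void)
{
	atomic_inc(&gup_fast_count);
	/* Order the counter update before the lockless PTE walk. */
	smp_mb__after_atomic();
}

static inline void gup_fast_end(void)
{
	/* Order the PTE walk before dropping the counter. */
	smp_mb__before_atomic();
	atomic_dec(&gup_fast_count);
}

/*
 * flush_tlb_others() would test this and skip the gup-related
 * polling entirely in the common case where nobody is in gup_fast.
 */
static inline bool gup_fast_in_progress(void)
{
	return atomic_read(&gup_fast_count) != 0;
}

This trades a couple of atomic ops per gup_fast call for keeping the
common tlb-flush path cheap, matching the cost estimate in the mail.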