Re: [Xen-ia64-devel] vIOSAPIC and IRQs delivery
On Thursday 09 March 2006 00:57, Dong, Eddie wrote:
> Alex Williamson wrote:
> > On Wed, 2006-03-08 at 12:00 -0800, Magenheimer, Dan (HP Labs Fort
> > Collins) wrote:
> >>> We agree the IOSAPIC must belong to Xen. And it should be able to
> >>> deliver interrupts to domains and handle shared IRQs.
> >>
> >> Did I miss an answer to Tristan's earlier question, which was
> >> approximately: how many systems out there require shared IRQs? I
> >> realize there are some huge mainframe-class boxes with hundreds of
> >> I/O cards that probably do require shared IRQs, but I wonder whether
> >> this class of machine will even consider using Xen. Mainframe-class
> >> machines have other partitioning technologies with customer-expected
> >> features that Xen will never have (such as hardware fault
> >> containment).
> >
> > Hopefully I'm not stepping into a rat-hole here, but what are we
> > defining as a shared IRQ? If we're only talking about running out of
> > external interrupt vectors on the CPU and programming multiple
> > IOSAPIC RTEs to trigger the same vector, I agree. That case requires
> > a rather large system. Eventually we should support this, but things
> > like interrupt domains may be a better long-term solution.
> >
> > Thanks!
>
> The problem is that Xen already supports it by default; the solution is
> already there. What we do is just use it :-) Linux already supports
> this! Then why does IA64 want to remove this support and leave the
> extra effort to the future? If it were a new feature, I would agree we
> should start simple.

I also agree with enabling shared interrupts.

> > Then the third question: is there a performance cost for shared IRQ
> > support, even on systems that don't share IRQs? Is there an
> > additional performance cost for sharing an IRQ across domains?
> > With each NIC card capable of generating thousands of
> > interrupts/second, it doesn't seem wise to add overhead to every
> > interrupt on every Xen system to support the possibility that
> > someone may want to configure a system to share IRQs across domains.
>
> The extra overhead of the shared-IRQ support approach is zero. When an
> IRQ is not shared, Xen just injects it into the guest; there is no
> extra logic.

I agree on this point too.

> Event-channel-based pirq handling is much faster than the
> vIOSAPIC-based solution, as we established in the previous discussion.

See my next mail. I'd like to be convinced.

> Some basic conclusions from the previous mails:
>
> 1: The event-channel-based solution has better performance than
> vIOSAPIC. The exact amount is hard to say now and will depend on the
> benchmark tool.

Again, see my next mail.

> 2: The event-channel-based solution can support IRQ lines shared
> amongst domains without any extra logic.

We don't need event channels to share IRQs.

> 3: Stability: the event-channel-based solution has undergone long and
> thorough testing and real deployment, but vIOSAPIC has not. By the
> way, IRQ handling is very important in an OS; a race condition can
> cost people weeks or even months of debugging effort.

I don't agree. The current logic is much more tested than event
channels, at least on ia64! Radical changes are much more worrying.

> Some statements people are still arguing about:
>
> 4: Which one changes Linux less? My answer is still the
> event-channel-based solution, as all the event channel code is common
> Xen code and is already in (VBD/VNIF use it).

You will have to change iosapic.c almost like my change does (see
io_apic-xen.c), and add a new event-channel irq_type.

> 5: Which one is more transparent? The event-channel-based solution
> only needs, at initialisation time, an "if running_on_xen" to choose a
> different hardware interrupt type object (i.e. pirq_type vs. iosapic
> level_irq_type and edge_xxx). After that, no extra code.
> vIOSAPIC needs a runtime check for every IOSAPIC MMIO access.

Checking running_on_xen costs nothing. If you worry about that, forget
transparent virtualization.

> 6: Future stability: my answer is clear, fixing an initial-time bug
> costs only one-tenth of fixing a runtime bug. The event-channel-based
> solution only changes initialisation-time code.

The current interrupt delivery is stable. Why change something that is
working?

Tristan.

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel