
Re: [Xen-devel] [PATCH] x86: clean up __io_apic_eoi()




On 14/11/11 09:08, Jan Beulich wrote:
>>>> On 11.11.11 at 17:49, Andrew Cooper <andrew.cooper3@xxxxxxxxxx> wrote:
>> On 11/11/11 16:07, Jan Beulich wrote:
>>> Irrespective of the IO-APIC vector sharing suppression patch just sent,
>>> the logic in this function needs to iterate over all RTEs, since
>>> multiple pins within an IO-APIC may still use the same vector.
>> Why?  The whole point of suppressing vector sharing for IO-APICs is to
>> prevent two or more RTEs from referencing the same vector.
> If that were really the case on *all* systems, then we wouldn't need
> the chains of IRQs hanging off irq_2_pin[] entries. Evidently there
> are, have been, or at least could be systems that do make use of this.
>
> BUT, after thinking about this some more over the weekend (and after
> fixing the issue pointed out in the other response regarding the other
> patch), I think we can actually convert the function to behave the way
> you intended once the vector sharing issue is dealt with: the only
> remaining call sites are then __eoi_IO_APIC_irq() (which already
> traverses the chain hanging off irq_2_pin[]) and clear_IO_APIC_pin()
> (which explicitly wants to deal with just a single (apic, pin) tuple).
> The uses in the timer-interrupt-related boot time code were bogus in
> this respect even before your original change to the EOI logic, as
> they assume that no other (apic, pin) tuple also represents the timer
> IRQ.
>
> Jan
>

Ah yes.  I was being silly and had not considered that possibility.
Realistically, I doubt there are many boxes still around with shared
ISA interrupts, but we should handle that case anyway.
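
To make the above concrete, here is a minimal, self-contained sketch of
the pattern under discussion.  It is not the Xen code; the pin count and
the names mock_rte, eoi_one_pin and eoi_pins_for_vector are invented for
illustration only.  It shows why, as long as vector sharing is possible,
an EOI keyed on a vector has to scan every RTE of the IO-APIC rather
than stop at the first matching pin.

/*
 * Illustrative sketch only - NOT the Xen implementation.  All type and
 * function names below are made up for this example.
 */
#include <stdio.h>

#define NR_PINS 24                      /* pins on one (model) IO-APIC */

struct mock_rte {
    unsigned char vector;               /* vector programmed into the RTE */
};

/* One IO-APIC's redirection table. */
static struct mock_rte rtes[NR_PINS];

static void eoi_one_pin(unsigned int apic, unsigned int pin)
{
    /* Stand-in for the real directed-EOI register write. */
    printf("EOI apic %u pin %u (vector 0x%02x)\n",
           apic, pin, (unsigned int)rtes[pin].vector);
}

/*
 * While vector sharing is possible, several pins of the same IO-APIC
 * may be programmed with the same vector, so the scan must cover all
 * RTEs instead of stopping at the first match.
 */
static void eoi_pins_for_vector(unsigned int apic, unsigned char vector)
{
    unsigned int pin;

    for (pin = 0; pin < NR_PINS; pin++)
        if (rtes[pin].vector == vector)
            eoi_one_pin(apic, pin);
}

int main(void)
{
    /* Two pins of APIC 0 sharing vector 0x30, e.g. a shared ISA IRQ. */
    rtes[4].vector = 0x30;
    rtes[7].vector = 0x30;

    eoi_pins_for_vector(0, 0x30);       /* EOIs both pin 4 and pin 7 */
    return 0;
}

Once vector sharing is suppressed, at most one pin per IO-APIC carries a
given vector, so the scan can collapse to a single (apic, pin) operation,
which is exactly what the two remaining callers described above,
__eoi_IO_APIC_irq() (walking the irq_2_pin[] chain) and
clear_IO_APIC_pin(), already provide.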

-- 
Andrew Cooper - Dom0 Kernel Engineer, Citrix XenServer
T: +44 (0)1223 225 900, http://www.citrix.com




 

