
Re: [Xen-devel] [PATCH] x86/apic/x2apic: Share IRQ vector between cluster members only when no cpumask is specified



On 07/08/17 09:18, Jan Beulich wrote:
>>>> Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx> 07/31/17 10:03 PM >>>
>> We have a limited number (slightly under NR_DYNAMIC_VECTORS=192) of IRQ
>> vectors that are available to each processor. Currently, when x2apic
>> cluster mode is used (which is default), each vector is shared among
>> all processors in the cluster. With many IRQs (as is the case on systems
>> with multiple SR-IOV cards) and few clusters (e.g. single socket)
>> there is a good chance that we will run out of vectors.
>>
>> This patch tries to decrease vector sharing between processors by
>> assigning a vector to a single processor if the assignment request (via
>> __assign_irq_vector()) comes without explicitly specifying which
>> processors are expected to share the interrupt. This typically happens
>> during boot time (or possibly PCI hotplug) when create_irq(NUMA_NO_NODE)
>> is called. When __assign_irq_vector() is called from
>> set_desc_affinity(), which provides a sharing mask, vector sharing will
>> continue to be performed, as before.
> Wouldn't it be sufficient for people running into a vector shortage due to
> sharing to specify "x2apic_phys" on the command line?
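
As an illustration of the allocation policy in the quoted description
(a minimal, self-contained sketch only, not the actual Xen code; all
names below are made up for the example):

    #include <stdio.h>
    #include <stdint.h>

    typedef uint32_t cpumask_t;            /* one bit per CPU, 32 CPUs max */

    /* Toy model: four CPUs per x2APIC cluster. */
    #define CLUSTER_MASK(cpu)  (0xFu << (((cpu) / 4) * 4))

    /* Decide which CPUs will share the vector being assigned. */
    static cpumask_t pick_vector_target(int cpu, const cpumask_t *explicit_mask)
    {
        if ( explicit_mask && *explicit_mask )
            /* Caller (e.g. set_desc_affinity()) supplied a mask: keep
             * sharing the vector across the requested cluster members. */
            return *explicit_mask & CLUSTER_MASK(cpu);

        /* No mask (e.g. create_irq(NUMA_NO_NODE) at boot or PCI hotplug):
         * bind the vector to this single CPU to conserve vectors. */
        return 1u << cpu;
    }

    int main(void)
    {
        cpumask_t affinity = 0xC;          /* CPUs 2 and 3 requested */

        /* Without a mask the vector goes to CPU 2 alone (0x4); with a
         * mask it is shared across CPUs 2 and 3 (0xc). */
        printf("no mask   -> %#x\n", (unsigned)pick_vector_target(2, NULL));
        printf("with mask -> %#x\n", (unsigned)pick_vector_target(2, &affinity));
        return 0;
    }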

Had XenServer noticed this change in default earlier, I would have
insisted that you make x2apic_phys the default.
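
(For anyone hitting the shortage in the meantime: x2apic_phys is a
boolean Xen command-line option, so on a GRUB2-based system it can
typically be appended to the hypervisor arguments, roughly as below;
the exact file and variable names are distribution-specific:

    # /etc/default/grub
    GRUB_CMDLINE_XEN_DEFAULT="x2apic_phys"

followed by regenerating grub.cfg, e.g. with update-grub.)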

On affected systems, the problem manifests as an upgrade after which
the new hypervisor doesn't boot.  Recovering from this requires human
intervention during the server boot.

As it was, the change went unnoticed for several releases upstream, so
I'm not sure if changing the default at this point is a sensible course
of action.

~Andrew
