
Re: [Xen-devel] [PATCH] x86/apic/x2apic: Share IRQ vector between cluster members only when no cpumask is specified

On 07/08/17 17:49, Boris Ostrovsky wrote:
> On 08/07/2017 12:20 PM, Andrew Cooper wrote:
>> On 07/08/17 17:06, Jan Beulich wrote:
>>>>>> Andrew Cooper <andrew.cooper3@xxxxxxxxxx> 08/07/17 5:40 PM >>>
>>>> On 07/08/17 16:38, Jan Beulich wrote:
>>>>> So a single NIC drove the system out of vectors? That's insane, I
>>>>> would say, i.e. I'd call this a misconfigured system. But yeah, if
>>>>> we really want to get such a thing to work despite the insanity ...
>>>> No.  That's a single (dual-headed) card with 128 virtual functions
>>>> per head.
>>> But putting such a card in a single-core system is, well, not very
>>> reasonable. At the very least I'd expect the admin to limit the number
>>> of VFs, or not to load a driver for them in Dom0 (but only for the PF).
>> It's not a single-core system, but cluster mode allocates vectors across
>> clusters, not cores. This turns even the largest server system into a
>> single-core system as far as vector availability goes.
> In my case we have enough vectors on a fully-populated system (4
> sockets), which is how it was installed.
> However, we want to be able to reduce the number of processors, and one
> such configuration is to limit processors to a single socket (which
> translates to a single cluster as far as the APIC is concerned).
> Requiring admins to also reduce the number of virtual functions is
> probably somewhat complicated to explain/document (especially since this
> would depend on the number of cores per socket).

NetScaler's use case is a dual-socket system, but it is already having to
under-utilise the hardware's interrupt capability just to fit a single
interrupt from every device into the vector space available in Xen.
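
To put rough numbers on that, a back-of-the-envelope sketch in C. The
range 0x20..0xef for dynamically allocatable vectors is my approximation
of Xen's x86 vector layout rather than an exact constant, and one MSI-X
vector per VF is the bare minimum:

/* Rough arithmetic only; the vector range below is an assumption
 * approximating Xen's x86 layout, not an exact constant. */
#include <stdio.h>

int main(void)
{
    /* Dynamically allocatable vectors: roughly 0x20..0xef. */
    unsigned int dynamic = 0xef - 0x20 + 1;   /* 208 */

    /* One dual-headed NIC, 128 VFs per head, >= 1 MSI-X vector each. */
    unsigned int nic = 2 * 128;               /* 256 */

    /* In x2APIC cluster mode a vector is consumed cluster-wide, so the
     * whole machine effectively has one core's worth of vector space. */
    printf("need %u, have at most %u -> short by %u\n",
           nic, dynamic, nic - dynamic);
    return 0;
}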

More than once we've discussed freeing up the legacy PIC range of
vectors, and the unallocated vectors in the 0xfx range.
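
For scale, a similarly hedged sketch of what those two reclamations might
buy; both counts are my guesses, not figures from this thread:

/* Back-of-the-envelope only: possible per-cluster gain from the two
 * proposed reclamations.  Both counts are assumptions. */
#include <stdio.h>

int main(void)
{
    unsigned int legacy_pic = 16; /* vectors shadowing the two 8259A PICs */
    unsigned int spare_0xfx = 8;  /* guess: unallocated slots in 0xf0..0xff */

    printf("potential gain: ~%u extra dynamic vectors\n",
           legacy_pic + spare_0xfx);
    return 0;
}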

