
Re: [Xen-devel] [PATCH] x86/apic/x2apic: Share IRQ vector between cluster members only when no cpumask is specified

On 07/08/17 16:38, Jan Beulich wrote:
>>>> Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx> 08/07/17 5:12 PM >>>
>> On 08/07/2017 10:52 AM, Jan Beulich wrote:
>>>>>> Andrew Cooper <andrew.cooper3@xxxxxxxxxx> 08/07/17 4:39 PM >>>
>>>> On 07/08/17 09:18, Jan Beulich wrote:
>>>>> Wouldn't it be sufficient for people running into vector shortage due to
>>>>> sharing to specify "x2apic_phys" on the command line?
>>>> Had XenServer noticed this change in default earlier, I would have
>>>> insisted that you make x2apic_phys the default.
>>> Change in default? About five and a half years ago I did implement cluster
>>> mode properly, but I wasn't able to spot a commit that changed the
>>> default from physical to cluster.
>>>> On affected systems, the problem manifests after an upgrade: the new
>>>> hypervisor doesn't boot.  Recovering from this requires human
>>>> intervention during the server boot.
>>> This needs some more explanation: Why would the hypervisor not boot?
>>> Even if you said Dom0 didn't boot, it wouldn't really be clear to me why,
>>> as running out of vectors would likely occur only after critical devices
>>> had their drivers loaded.
>> What if your critical device is a networking card with lots of functions? In
>> my case I had two cards. The first one was plumbed properly but the
>> second failed. It so happened that only the first one was required to
>> boot but I can see that in another configuration an init script might
>> require both to be initialized.
> So a single NIC drove the system out of vectors? That's insane, I would
> say, i.e. I'd call this a misconfigured system. But yeah, if we really want
> to get such a thing to work despite the insanity ...

No.  That's a single (dual-headed) card with 128 virtual functions per port.
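For context, a rough back-of-the-envelope check of why such a card exhausts the vector space under cluster-wide sharing. The numbers below are illustrative assumptions (the dynamic vector range and cluster size are approximate, not Xen's exact accounting):

```python
# Illustrative sketch: why one dual-headed NIC with 128 VFs per port can
# exhaust IRQ vectors when all x2apic cluster members share each vector.

USABLE_VECTORS = 0xEF - 0x20 + 1   # rough dynamic range 0x20..0xef (~208);
                                   # real systems reserve a few more
CLUSTER_SIZE = 16                  # an x2apic cluster holds up to 16 CPUs
PORTS, VFS_PER_PORT = 2, 128
irqs_needed = PORTS * VFS_PER_PORT # at least one vector per function

# Shared allocation: a vector used by any cluster member is consumed
# cluster-wide, so the whole cluster offers only one CPU's worth.
shared_capacity = USABLE_VECTORS

# Per-CPU allocation (as in physical mode) scales with the CPU count.
per_cpu_capacity = USABLE_VECTORS * CLUSTER_SIZE

print(irqs_needed > shared_capacity)   # True  -> vectors run out
print(irqs_needed > per_cpu_capacity)  # False -> ample headroom
```

With these assumed numbers, 256 functions overflow the ~208 shared vectors of a single cluster, while per-CPU allocation would leave thousands spare, which matches the observation that only the second card failed to be plumbed.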


Xen-devel mailing list


