Re: [Xen-devel] Crash in set_cpu_sibling_map() booting Xen 4.6.0 on Fusion
>>> On 24.11.15 at 15:13, <eswierk@xxxxxxxxxxxxxxxxxx> wrote:
> On Tue, Nov 24, 2015 at 2:34 AM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
>> Bottom line - for the moment I do not see a reasonable way of
>> dealing with that situation. The closest I could see would be what
>> we iirc had temporarily during the review cycles of the initial CAT
>> series: A command line option to specify the number of sockets. Or
>> make all accesses to socket_cpumask[] conditional upon PSR being
>> enabled (which would have the bad side effect of making future
>> uses for other purposes more cumbersome), or go through and
>> range check the socket number on all of those accesses.
>
> Could we avoid the issue by replacing the socket_cpumask array with a
> list or hashtable, indexed by socket ID?

Yes, a radix tree would work. But it would also seem like overkill if
all we need it for is some strange virtualization of CPUID. The more I
think about it, the better I like the option below.

Jan

>> Chao, could you - inside Intel - please check whether there are
>> any assumptions on the respective CPUID leaf output that aren't
>> explicitly stated in the SDM right now (like resulting in contiguous
>> socket numbers), and ask for them getting made explicit (if there
>> are any), or it being made explicit that no assumptions at all are
>> to be made on the presented values (in which case we'd have to
>> consume MADT parsing data in set_nr_sockets(), e.g. by replacing
>> num_processors there with one more than the maximum APIC ID of any
>> non-disabled CPU)?
>
> I suppose the key is whether Intel has encoded such assumptions in the
> BIOS reference code, or has otherwise communicated them to AMI et al.
>
> --Ed
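For illustration, a minimal sketch of the range-check option mentioned
above, assuming the socket_cpumask[], nr_sockets and cpu_to_socket()
names as they exist in the Xen tree; the surrounding caller (e.g.
set_cpu_sibling_map()) is implied and the recovery action is only a
placeholder, not a proposed patch:

    /* Hedged sketch: guard a socket_cpumask[] access with a bounds
     * check on the firmware-provided socket number, instead of
     * trusting it to be contiguous and below nr_sockets. */
    unsigned int socket = cpu_to_socket(cpu);

    if ( socket >= nr_sockets )
    {
        printk(XENLOG_WARNING
               "CPU%u reports socket %u, but only %u sockets were sized\n",
               cpu, socket, nr_sockets);
        return;                  /* skip rather than overrun the array */
    }
    cpumask_set_cpu(cpu, socket_cpumask[socket]);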
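And a rough, hypothetical sketch of the option Jan says he prefers:
sizing nr_sockets in set_nr_sockets() from MADT data rather than from
num_processors. The madt_max_apic_id variable is a made-up name standing
in for whatever the MADT parser would record; the core/thread counts are
taken from boot_cpu_data as in the existing code:

    /* Hedged sketch, not the actual Xen implementation: derive the
     * number of sockets from one more than the highest APIC ID of any
     * non-disabled CPU seen while parsing the MADT, so sparse APIC or
     * socket numbering cannot push cpu_to_socket() past nr_sockets. */
    static unsigned int __initdata madt_max_apic_id; /* hypothetical:
                                        filled during MADT parsing */

    static void __init set_nr_sockets(void)
    {
        unsigned int cpus_per_socket = boot_cpu_data.x86_max_cores *
                                       boot_cpu_data.x86_num_siblings;

        if ( !cpus_per_socket )
            cpus_per_socket = 1;

        nr_sockets = DIV_ROUND_UP(madt_max_apic_id + 1, cpus_per_socket);
    }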