Re: [Xen-devel] PV-vNUMA issue: topology is misinterpreted by the guest
..snip..
> So, it looks to me that:
>  1) any application using CPUID for either licensing or
>     placement/performance optimization will get (potentially) random
>     results;

Right, that is a bug that Andrew outlined in this leveling document, I
believe. We just pluck the CPUID results from whatever PCPU we were
running on when the toolstack constructs them.

>  2) whatever set of values the kernel used, during guest boot, to build
>     up its internal scheduling data structures, has no guarantee of
>     being related to any value returned by CPUID, at a later point.

Or after a migration.

> Hence, I think I'm seeing inconsistency between kernel and userspace
> (and between userspace and itself, over time) already... Am I
> overlooking something?

No. I think that is bad. We should be providing the same data - unless
pinning is used, or some other mechanism is employed to change the
CPUID. By default the CPUID ought to be the same for a guest throughout
its life (let's ignore PV, which is 'special').

Also, oddly, what is with the SMT threads? On my HVM guests, if I
allocate four vCPUs (vcpus=4), I get four cores and each core says it
has four SMT threads? This is based on /proc/cpuinfo:

[konrad@build-external linux]$ cat /proc/cpuinfo | grep "core id"
core id         : 0
core id         : 1
core id         : 2
core id         : 3
[konrad@build-external linux]$ cat /proc/cpuinfo | grep "siblings"
siblings        : 4
siblings        : 4
siblings        : 4
siblings        : 4

?

> (I'll provide the same, for a PV guest, tomorrow.)
>
> Regards,
> Dario
>
> --
> <<This happens because I choose it to happen!>> (Raistlin Majere)
> -----------------------------------------------------------------
> Dario Faggioli, Ph.D, http://about.me/dario.faggioli
> Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
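For reference, here is a minimal sketch (not from the thread itself) of
how a guest application might query topology straight from CPUID - the
kind of lookup that can return inconsistent results if the leaves
reflect whichever PCPU the toolstack or vCPU happened to be running on.
It assumes x86 with GCC/clang <cpuid.h>; the file name cpuid-topo.c is
made up for illustration.

/* cpuid-topo.c - hypothetical example, not part of the thread.
 * Reads the topology-related CPUID leaves that an application might
 * use for licensing or placement decisions.
 * Build: gcc -O2 cpuid-topo.c -o cpuid-topo
 */
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* Leaf 0x1: EBX[23:16] = max addressable logical CPUs per package,
     * EDX[28] = HTT flag. */
    if (__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        printf("leaf 0x1: logical CPUs per package = %u, HTT = %u\n",
               (ebx >> 16) & 0xff, (edx >> 28) & 1);

    /* Leaf 0xB: walk the SMT/core levels of the extended topology. */
    if (__get_cpuid_max(0, NULL) >= 0xb) {
        for (unsigned int level = 0; ; level++) {
            __cpuid_count(0xb, level, eax, ebx, ecx, edx);
            unsigned int type = (ecx >> 8) & 0xff;  /* 1 = SMT, 2 = core */
            if (type == 0)                          /* invalid level: stop */
                break;
            printf("leaf 0xB level %u: type %u, logical CPUs %u\n",
                   level, type, ebx & 0xffff);
        }
    }
    return 0;
}

Running something like this inside the HVM guest above and comparing
the output against the "core id" and "siblings" lines in /proc/cpuinfo
would show whether the kernel's derived topology and the raw CPUID view
actually agree.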