Re: [Xen-devel] PV-vNUMA issue: topology is misinterpreted by the guest
On 07/17/2015 03:27 AM, Dario Faggioli wrote:
> On Fri, 2015-07-17 at 07:09 +0100, Jan Beulich wrote:
>> On 16.07.15 at 18:59, <boris.ostrovsky@xxxxxxxxxx> wrote:
>>> And in general (both for PV and HVM) --- is there any reason to
>>> expose CPU topology at all? I can see it being useful if VCPUs are
>>> pinned, but if they are not then it can make performance worse.
>> Indeed
> Indeed indeed. :-)
>
> And in fact, this is even independent from vNUMA. Yet, I remember we
> were discussing this since the beginning of the vNUMA work, back when
> it was Elena doing it, but then it seems we all forgot... Sorry for
> that! :-/
>
> I seriously think we should do something about this as, while in a
> non-vNUMA setup it can certainly cause weird/inconsistent performance,
> in a vNUMA one, as shown, it's quite a huge mess.
>
>> - that's what our kernels have been doing for years, and it seems
>> like someone over here is now looking into whether this could be
>> done in pv-ops too (without too much uglification).
> That would be great, IMO. I'd be up for helping with this, but I know
> next to nothing about CPUID, so that would require some setup time.
> If, at least, you could keep me in the loop it would be great.
>
> In the meanwhile, what should we do? Document this? How? "Don't use
> vNUMA with PV guests on SMT-enabled systems" seems a bit harsh... Is
> there a workaround we can put in place/suggest?

I haven't been able to reproduce this on my Intel box because I think I
have different core enumeration. Can you try adding

    cpuid = ['0x1:ebx=xxxxxxxx00000001xxxxxxxxxxxxxxxx']

to your config file?

On AMD, BTW, we fail a different test, so some other bits probably need
to be tweaked. You may fail it too (the LLC sanity check).

-boris
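For reference, in the xl cpuid= syntax the 32-character string maps one
character per bit of the register, leftmost character being bit 31, so
the "00000001" above forces leaf 0x1 EBX bits 23:16 (the maximum number
of addressable logical processor IDs per package) to 1, while the 'x'
positions are left alone. A minimal sketch (my own illustration, not
something posted in this thread; assumes an x86 guest with GCC/Clang
and <cpuid.h>) that dumps what the guest actually sees in that leaf, so
the effect of the override can be checked from inside the domain:

```c
/* Sketch: read CPUID leaf 0x1 and decode the EBX fields that the
 * proposed cpuid= override touches. */
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
        fprintf(stderr, "CPUID leaf 0x1 not available\n");
        return 1;
    }

    /* EBX bits 31:24 - initial APIC ID of this (v)CPU */
    printf("initial APIC ID:          %u\n", (ebx >> 24) & 0xff);

    /* EBX bits 23:16 - max addressable logical CPU IDs per package;
     * the override pins this to 1, so the guest should stop treating
     * consecutive APIC IDs as SMT siblings. */
    printf("logical CPUs per package: %u\n", (ebx >> 16) & 0xff);

    /* EDX bit 28 - HTT flag; the per-package count above is only
     * meaningful when this bit is set. */
    printf("HTT flag (EDX bit 28):    %u\n", (edx >> 28) & 0x1);

    return 0;
}
```

Running something like this inside the guest before and after adding
the cpuid= line should show whether the enumeration the kernel bases
its topology on actually changed.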