Re: [Xen-devel] [PATCH v6 00/10] vnuma introduction
On Sun, 2014-07-20 at 10:57 -0400, Elena Ufimtseva wrote:
> Running lstopo with vNUMA enabled in guest with 4 vnodes, 8 vcpus:
> root@heatpipe:~# lstopo
>
> Machine (7806MB) + L3 L#0 (7806MB 10MB) + L2 L#0 (7806MB 256KB) + L1d
> L#0 (7806MB 32KB) + L1i L#0 (7806MB 32KB)
>   NUMANode L#0 (P#0 1933MB) + Socket L#0
>     Core L#0 + PU L#0 (P#0)
>     Core L#1 + PU L#1 (P#4)
>   NUMANode L#1 (P#1 1967MB) + Socket L#1
>     Core L#2 + PU L#2 (P#1)
>     Core L#3 + PU L#3 (P#5)
>   NUMANode L#2 (P#2 1969MB) + Socket L#2
>     Core L#4 + PU L#4 (P#2)
>     Core L#5 + PU L#5 (P#6)
>   NUMANode L#3 (P#3 1936MB) + Socket L#3
>     Core L#6 + PU L#6 (P#3)
>     Core L#7 + PU L#7 (P#7)
>
> Basically, L2 and L1 are shared between nodes :)
>
> I have manipulated cache sharing options before in cpuid, but I agree
> with Wei it's just a part of the problem.
>
It is indeed.

> Along with the number of logical processors (if HT is enabled), I
> guess we need to construct APIC IDs (if it's not done yet, I could not
> find it), and cache sharing cpuids may be needed, taking pinning into
> account if set.
>
Well, I'm not sure. The thing is, this is a general issue, and we need
to find a general way to solve it, where by "general" I mean not
necessarily vNUMA related. Once we have that, we can see how to take
care of vNUMA.

I'm not sure I want to rely on pinning that much, as pinning can change
and, if it does, we'd be back to square one, playing tricks on the
in-guest scheduler.

Let's see what others think...

Regards,
Dario

--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

Attachment: signature.asc
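For context on the cache-sharing point above: on Intel-style CPUs the picture
lstopo draws ultimately traces back to the "deterministic cache parameters"
CPUID leaf (0x04), whose EAX[25:14] field advertises how many logical
processors share each cache, alongside the APIC IDs. The sketch below is only
illustrative and not part of this series; it assumes an Intel-style guest with
GCC's <cpuid.h>, and simply decodes that leaf inside the guest so the sharing
values behind the topology shown above can be inspected.

/*
 * Illustrative only (not part of this patch series): decode CPUID
 * leaf 0x04 (Intel deterministic cache parameters) inside the guest
 * to see how many logical CPUs each cache level claims to be shared
 * by.  An unadjusted leaf is what makes L1/L2/L3 appear shared
 * across all vnodes in lstopo.
 *
 * Build with: gcc -o cacheshare cacheshare.c
 */
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    for (unsigned int subleaf = 0; ; subleaf++) {
        /* Leaf 0x04: one subleaf per cache level; stops when unsupported. */
        if (!__get_cpuid_count(0x04, subleaf, &eax, &ebx, &ecx, &edx))
            break;

        unsigned int type = eax & 0x1f;   /* 0 means no more cache levels */
        if (type == 0)
            break;

        unsigned int level   = (eax >> 5) & 0x7;
        /* EAX[25:14]: max addressable logical CPUs sharing this cache, minus 1 */
        unsigned int sharing = ((eax >> 14) & 0xfff) + 1;
        /* EAX[31:26]: max addressable cores in the physical package, minus 1 */
        unsigned int cores   = ((eax >> 26) & 0x3f) + 1;

        printf("L%u %s cache: shared by up to %u logical CPUs (%u cores/package)\n",
               level,
               type == 1 ? "data" : type == 2 ? "instruction" : "unified",
               sharing, cores);
    }

    return 0;
}

Presumably, getting lstopo to stop merging the caches across vnodes would mean
having the toolstack shrink that sharing field (and lay out the APIC IDs) per
virtual socket, which is the kind of general CPUID handling being discussed in
this thread.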
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel