
Re: [Xen-devel] PV-vNUMA issue: topology is misinterpreted by the guest



On 07/28/2015 06:17 PM, Dario Faggioli wrote:
On Tue, 2015-07-28 at 17:11 +0200, Juergen Gross wrote:
On 07/28/2015 06:29 AM, Juergen Gross wrote:

I'll make some performance tests on a big machine (4 sockets, 60 cores,
120 threads) regarding topology information:

- bare metal
- "random" topology (like today)
- "simple" topology (all vcpus regarded as equal)
- "real" topology with all vcpus pinned

This should show:

- how intrusive would the topology patch(es) be?
- what is the performance impact of a "wrong" scheduling database?
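
For the "real topology, all vcpus pinned" case, a minimal sketch of how the
pinning could be set up from dom0 with xl; the domain name "guest" and the
1:1 vcpu-to-pcpu mapping are assumptions for illustration, not taken from
this thread:

# Sketch: pin every vcpu 1:1 to the physical cpu with the same number, so
# the topology the guest derives matches where its vcpus actually run.
# "guest" is a placeholder domain name; 120 matches the box described above.
for v in $(seq 0 119); do
    xl vcpu-pin guest "$v" "$v"
done

# Check the resulting placement / hard affinity:
xl vcpu-list guest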

On the above box I used a pvops kernel 4.2-rc4 plus a rather small patch
(see attachment). I did 5 kernel builds in each environment:

make clean
time make -j 120

Right. If you have time, can you try '-j60' and '-j30' (maybe even -j45
and -j15, if you've got _a_lot_ of time! :-)).

The test machine can do this without me watching, so I've just started
the first configuration...

I'm asking this because, with hyperthreading involved, I've sometimes
seen things being worse when *not* (over)saturating the CPU
capacity.

Hmm, oversaturation shouldn't happen here. I've added -j 240 to let it
happen.
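
Just to illustrate how the sweep over the different -j levels could be run
unattended, a small sketch (the kernel tree location and the log file are
placeholders, not from the thread):

# Sketch: rebuild the kernel at several -j levels and log the wall times.
cd /path/to/linux-4.2-rc4   # placeholder path to the kernel tree
for j in 240 120 60 45 30 15; do
    make clean > /dev/null
    echo "== -j $j ==" >> build-times.log
    { time make -j "$j" > /dev/null ; } 2>> build-times.log
done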

...

So, basically, as far as Dom0 on my test box is concerned, "random"
actually matches the host topology.

Okay, have to check that on my box.

I think I'll have another try with a domU. This could be much more
"random" than dom0.

Sure, without pinning, this looks equally wrong, as Xen's scheduler can
well execute, say, vcpu 0 and vcpu 4, which are not siblings, on the
same core. But then again, if the load is small, it just won't happen
(e.g., if there are only those two busy vcpus, Xen will send them to
non-sibling cores), while if it's too high, it won't matter... :-/
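
One way to see how "random" the topology a domU ended up with really is
(a sketch, assuming a Linux guest with sysfs mounted, and "guest" as a
placeholder domain name) is to compare what the guest kernel derived with
where Xen actually places the vcpus:

# Inside the guest: topology as the guest kernel sees it.
lscpu -e=CPU,CORE,SOCKET
grep . /sys/devices/system/cpu/cpu*/topology/thread_siblings_list

# From dom0: where the vcpus are really running / pinned.
xl vcpu-list guest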




_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

