
Re: [Xen-devel] PV-vNUMA issue: topology is misinterpreted by the guest



On 07/27/2015 12:54 PM, Andrew Cooper wrote:
On 27/07/15 11:43, George Dunlap wrote:
On Mon, Jul 27, 2015 at 5:35 AM, Juergen Gross <jgross@xxxxxxxx> wrote:
On 07/24/2015 06:44 PM, Boris Ostrovsky wrote:
On 07/24/2015 12:39 PM, Juergen Gross wrote:


I don't say mangling cpuids can't solve the scheduling problem. It
surely can. But it can't do so without hiding information like the
number of sockets or cores, which might be required for licensing
purposes. If we don't care, fine.

(this is somewhat repeating the email I just sent)

Why can't we construct socket/core info with CPUID (and *possibly* ACPI
changes) such that we present a reasonable (licensing-wise) picture?

Can you suggest an example where it will not work and then maybe we can
figure something out?

Let's assume software with a license based on core count. You have a
system with two 8-core processors and hyperthreading enabled, summing up
to 32 logical processors. Your license is valid for up to 16 cores, so
running the software on bare metal on this system is fine.

Now you run the software inside a virtual machine with 24 vcpus in a
cpupool with 24 logical cpus, limited to 12 cores (6 cores of each
processor). As we have to hide hyperthreading in order not to have to
pin each vcpu to a single logical processor, the topology resulting
from this setup will have to present 24 cores. The license will not
cover this hardware.
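To make the arithmetic in this scenario explicit, here is a minimal
sketch of the core counting (the constants mirror the example above;
the license limit of 16 cores is taken from the scenario):

```python
# Core-count arithmetic for the licensing example (illustrative only).
SOCKETS, CORES_PER_SOCKET, THREADS_PER_CORE = 2, 8, 2

logical_cpus = SOCKETS * CORES_PER_SOCKET * THREADS_PER_CORE  # 32
physical_cores = SOCKETS * CORES_PER_SOCKET                   # 16

LICENSE_CORE_LIMIT = 16
print(physical_cores <= LICENSE_CORE_LIMIT)   # True: bare metal is licensed

# VM: 24 vcpus in a cpupool of 24 logical cpus (12 cores, HT hidden).
# With hyperthreading hidden, each vcpu is presented as its own core:
vcpus = 24
apparent_cores = vcpus
print(apparent_cores > LICENSE_CORE_LIMIT)    # True: license no longer covers it
```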

But how does doing a PV topology help this situation?  Because we're
telling one thing to the OS (via our PV interface) and another thing
to applications (via direct CPUID access)?

I expressed exactly these concerns right back at the start of the vnuma
work.

The OS and its userspace can and will use cpuid.  Most examples will
only use cpuid.  The only thing worse than providing no NUMA information
at all is providing conflicting information between cpuid and vnuma.

IMO, HVM guests should get all their NUMA information from the same
sources as native hardware would provide.  PV guests are admittedly
harder, as in general we cannot hide the real topology information in
cpuid.
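As an illustration of what "userspace can and will use cpuid" means in
practice, here is a hedged sketch of decoding CPUID leaf 0xB (the
extended topology leaf that topology-aware applications typically
consult). The register values below are hypothetical, modelling one
hyperthreaded 8-core package; a real program would obtain them by
executing the CPUID instruction:

```python
# Sketch: decoding CPUID leaf 0xB (extended topology enumeration).
# ECX[15:8] encodes the level type, EBX[15:0] the number of logical
# processors at that level.  Sample register values are hypothetical.

SMT, CORE = 1, 2  # level types reported in ECX[15:8]

def decode_leaf_0xb(subleaves):
    """subleaves: list of (eax, ebx, ecx) tuples for ECX=0, 1, ..."""
    threads_per_core = logical_per_package = 1
    for eax, ebx, ecx in subleaves:
        level_type = (ecx >> 8) & 0xFF
        count = ebx & 0xFFFF          # logical processors at this level
        if level_type == SMT:
            threads_per_core = count
        elif level_type == CORE:
            logical_per_package = count
    return threads_per_core, logical_per_package // threads_per_core

# Hypothetical dump for a 2-thread, 8-core package:
sample = [(1, 2, SMT << 8), (4, 16, CORE << 8)]
print(decode_leaf_0xb(sample))  # (2, 8): 2 threads/core, 8 cores/package
```

Note that this path bypasses the OS entirely, which is why a PV
interface alone cannot keep the application's view consistent.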

Are you aware that the same is true today even without vNUMA?

The Linux kernel (and other OSes as well) will make scheduling decisions
based on cpuid data obtained during boot. That information will be
correct only by chance, as the real relation between vcpus and pcpus
changes all the time.
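For concreteness, a sketch of where that boot-time view surfaces: on
Linux, the topology derived from CPUID at boot is exported under
/sys/devices/system/cpu/*/topology and stays fixed, even though on a PV
guest Xen may move vcpus between pcpus at any time (the paths are the
standard Linux sysfs ones; the function returns an empty mapping where
they are absent):

```python
# Sketch: reading the kernel's boot-time topology view from sysfs.
# On a PV guest these values reflect CPUID as sampled at boot and do
# not track the actual vcpu -> pcpu placement.
import glob
import os

def boot_time_core_ids():
    view = {}
    for topo in sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*/topology")):
        cpu = topo.split("/")[-2]           # e.g. "cpu0"
        core_id_path = os.path.join(topo, "core_id")
        if os.path.exists(core_id_path):
            with open(core_id_path) as f:
                view[cpu] = f.read().strip()
    return view

print(boot_time_core_ids())
```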

So without adapting the kernel to that scenario it won't run optimally.
You can either change the data to let the kernel make some sane
decisions (cpuid mangling) or you can adapt the kernel somehow, e.g.
by modifying the kernel-internal tables used for making scheduling
decisions (my proposal).

Something should be done regardless of the vNUMA support.


Juergen


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
