
Re: [Xen-devel] PV-vNUMA issue: topology is misinterpreted by the guest



On Tue, 2015-07-28 at 00:19 +0100, Andrew Cooper wrote:
> On 27/07/2015 18:42, Dario Faggioli wrote:
> > On Mon, 2015-07-27 at 17:33 +0100, Andrew Cooper wrote:
> >> On 27/07/15 17:31, David Vrabel wrote:
> >>>
> >>>> Yeah, indeed. That's the downside of Juergen's "Linux scheduler
> >>>> approach". But the issue is there, even without taking vNUMA into
> >>>> account, and I think something like that would really help (only for
> >>>> Dom0, and Linux guests, of course).
> >>> I disagree.  Whether we're using vNUMA or not, Xen should still ensure
> >>> that the guest kernel and userspace see a consistent and correct
> >>> topology using the native mechanisms.
> >>
> >> +1
> >>
> > +1 from me as well. In fact, a mechanism for making exactly such thing
> > happen, was what I was after when starting the thread.
> >
> > Then it came up that CPUID needs to be used for at least two different
> > and potentially conflicting purposes, that we want to support both and
> > that, whether and for whatever reason it's used, Linux configures its
> > scheduler after it, potentially resulting in rather pathological setups.
> 
> I don't see what the problem is here.  Fundamentally, "NUMA optimise"
> vs "comply with licence" is a user/admin decision at boot time, and we
> need not cater to both halves at the same time.
>
> Supporting either, as chosen by the admin, is worthwhile.
>
I agree (with the added consideration that, as Juergen is saying, this
is not only related to [v]NUMA).

When optimising for placement/NUMA/scheduling/etc., I really don't see
any problem at all (once more, that's what I was after when starting the
thread!).

An issue might be that, when optimising for licenses, especially (but
not only) in combination with vNUMA, you risk putting the guest
scheduler in a really weird situation; hence the idea of taking care of
that in some other way.
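
As a rough way of seeing what topology the guest scheduler has actually
derived (just a sketch, assuming lscpu from util-linux is available in
the guest), one can compare what Linux exposes with what we hand it via
CPUID/vNUMA:

  # topology as decoded by the guest kernel (sockets, cores, threads, NUMA nodes)
  lscpu

  # raw per-vCPU data the scheduling domains are built from
  grep . /sys/devices/system/cpu/cpu*/topology/{physical_package_id,core_id}

  # which vCPUs the guest believes belong to each (v)NUMA node
  cat /sys/devices/system/node/node*/cpulist

If those disagree with each other, or with what the toolstack actually
configured, that's the kind of pathological setup I'm talking about.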

Anyway, if Juergen manages to give us some numbers, we may have a better
picture of whether and how much this matters.

> > It's at that point that some decoupling started to appear
> > interesting... :-P
> >
> > Also, are we really being consistent? If my methodology is correct
> > (which might not be, please, double check, and sorry for that), I'm
> > seeing quite some inconsistency around:
> >
> > HOST:
> >  root@Zhaman:~# xl info -n
> >  ...
> >  cpu_topology           :
> >  cpu:    core    socket     node
> >    0:       0        1        0
> >    1:       0        1        0
> >    2:       1        1        0
> >    3:       1        1        0
> >    4:       9        1        0
> >    5:       9        1        0
> >    6:      10        1        0
> >    7:      10        1        0
> >    8:       0        0        1
> >    9:       0        0        1
> >   10:       1        0        1
> >   11:       1        0        1
> >   12:       9        0        1
> >   13:       9        0        1
> >   14:      10        0        1
> >   15:      10        0        1
> 
> o_O
>
> What kind of system results in this layout?  Can you dump the ACPI
> tables and make them available?
>
This is the test box I use for day-to-day development and testing...

I've done an acpidump (while on baremetal), and am attaching the result.
Let me know if I should do anything more or different.
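
In case it is useful, this is roughly the workflow I used (assuming the
acpica-tools variants of the utilities; the file names are just
examples):

  # dump all the ACPI tables (run on baremetal, as root)
  acpidump > acpi.log

  # extract the binary tables from the dump (produces e.g. srat.dat)
  acpixtract -a acpi.log

  # disassemble the SRAT, which carries the CPU/memory-to-node affinity
  iasl -d srat.dat

so, if the raw dump is not convenient, I can just send the disassembled
SRAT/SLIT instead.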

If I can ask, what is it that looks weird?

Thanks and Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

Attachment: acpi.log
Description: Text Data

Attachment: signature.asc
Description: This is a digitally signed message part

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

