
Re: [Xen-devel] PV-vNUMA issue: topology is misinterpreted by the guest



>>> On 16.07.15 at 12:32, <dario.faggioli@xxxxxxxxxx> wrote:
> root@test:~# numactl --hardware
> available: 2 nodes (0-1)
> node 0 cpus: 0 1
> node 0 size: 475 MB
> node 0 free: 382 MB
> node 1 cpus: 2 3
> node 1 size: 495 MB
> node 1 free: 475 MB
> node distances:
> node   0   1 
>   0:  10  10 
>   1:  20  10
> 
> root@test:~# cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list 
> 0-1
> root@test:~# cat /sys/devices/system/cpu/cpu0/topology/core_siblings_list 
> 0-3
> root@test:~# cat /sys/devices/system/cpu/cpu2/topology/thread_siblings_list 
> 2-3
> root@test:~# cat /sys/devices/system/cpu/cpu2/topology/core_siblings_list 
> 0-3
> 
> So the complaint during boot seems to be about 'core_siblings' (which
> was not what I expected, but perhaps I misremember the meaning of
> "core_siblings" vs. "thread_siblings" vs. smt-siblings in Linux; I'll
> double check).
> 
> Anyway, is there anything we can do to fix or work around things?
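
[ Editor's note: the boot-time complaint is presumably Linux's
topology_sane() check in arch/x86/kernel/smpboot.c, which warns when a
CPU's sibling mask crosses a NUMA node boundary. The same cross-check
can be reproduced from userspace against the sysfs files quoted above.
A minimal sketch; the 64-CPU cap and the bare-bones error handling are
simplifications, not anything from this thread:

/*
 * Userspace re-run of the kernel's sibling-vs-node cross-check:
 * for each CPU, verify that its core_siblings mask does not span
 * more than one NUMA node.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_CPUS 64

/* Parse a sysfs cpulist such as "0-1,4" into a bitmask. */
static unsigned long long parse_cpulist(const char *path)
{
    char buf[256];
    unsigned long long mask = 0;
    FILE *f = fopen(path, "r");

    if (!f)
        return 0;
    if (!fgets(buf, sizeof(buf), f)) {
        fclose(f);
        return 0;
    }
    fclose(f);

    for (char *tok = strtok(buf, ","); tok; tok = strtok(NULL, ",")) {
        int lo, hi;

        if (sscanf(tok, "%d-%d", &lo, &hi) != 2)
            lo = hi = atoi(tok);
        for (int c = lo; c <= hi && c < MAX_CPUS; c++)
            mask |= 1ULL << c;
    }
    return mask;
}

int main(void)
{
    unsigned long long node_mask[MAX_CPUS];
    char path[128];
    int nodes;

    /* Collect each node's cpulist until the next node is absent. */
    for (nodes = 0; nodes < MAX_CPUS; nodes++) {
        snprintf(path, sizeof(path),
                 "/sys/devices/system/node/node%d/cpulist", nodes);
        node_mask[nodes] = parse_cpulist(path);
        if (!node_mask[nodes])
            break;
    }

    /* Flag any CPU whose core_siblings cross a node boundary. */
    for (int cpu = 0; cpu < MAX_CPUS; cpu++) {
        unsigned long long sib;

        snprintf(path, sizeof(path),
                 "/sys/devices/system/cpu/cpu%d/topology/core_siblings_list",
                 cpu);
        sib = parse_cpulist(path);
        if (!sib)
            break;

        for (int n = 0; n < nodes; n++)
            if ((sib & node_mask[n]) && (sib & ~node_mask[n]))
                printf("cpu%d: core_siblings %#llx spill out of node%d (%#llx)\n",
                       cpu, sib, n, node_mask[n]);
    }
    return 0;
}

With the output quoted above this flags every CPU, since core_siblings
is 0-3 while each node only holds two CPUs. ]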

Make the guest honor the topology at the CPUID layer as well. Whether
that's by not wrongly consuming the respective CPUID bits (i.e. a
guest-side change) or by reflecting the PV state in what the
hypervisor returns, I'm not sure. While the latter might be the
cleaner approach, I'd be afraid it could get in the way of what the
tool stack wants to see.
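
[ Editor's note: to make the CPUID angle concrete, the guest derives
its sibling masks from leaf 1 (the HTT flag in EDX bit 28 and the
logical-CPU count in EBX bits 23:16) and, on Intel, from leaf 4
(cores per package). A minimal userspace sketch, loosely following
Linux's detect_ht() logic; the <cpuid.h> helpers are GCC/clang
builtins, and restricting this to the Intel leaves is a
simplification:

/*
 * What the guest actually computes from CPUID: leaf 1 gives the
 * HTT flag and the logical-CPU count per package; Intel's leaf 4
 * gives cores per package.  AMD and leaf-0xb enumeration are
 * deliberately ignored here.
 */
#include <cpuid.h>   /* GCC/clang builtin helpers */
#include <stdio.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;
    unsigned int logical, cores, threads_per_core;

    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        return 1;

    if (!(edx & (1u << 28))) {
        /* HTT clear: the guest would assume 1 thread per core. */
        puts("HTT not advertised: 1 thread/core assumed");
        return 0;
    }

    logical = (ebx >> 16) & 0xff;        /* logical CPUs per package */

    /* Leaf 4, subleaf 0 (Intel): EAX[31:26] = cores per package - 1. */
    if (!__get_cpuid_count(4, 0, &eax, &ebx, &ecx, &edx))
        return 1;
    cores = ((eax >> 26) & 0x3f) + 1;

    threads_per_core = logical / cores;
    printf("package: %u logical CPU(s), %u core(s), %u thread(s)/core\n",
           logical, cores, threads_per_core);
    return 0;
}

On PV these leaves pass host package values through, so the masks the
guest builds from them need not match the vNUMA layout. The two
options in the paragraph above correspond to not consuming these bits
in the guest versus having the hypervisor adjust what the leaves
return. ]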

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

