Re: [Xen-devel] xl vcpu-pin peculiarities in core scheduling mode
On 24.03.20 14:34, Sergey Dyasli wrote:

I think all of the effects can be explained by the way pinning with core scheduling is implemented. This does not mean that the information presented to the user shouldn't be adapted.

Basically, pinning any vcpu will just affect the "master" vcpu of a virtual core (sibling 0). It will happily accept any setting as long as any "master" cpu of a core is in the resulting set of cpus. All vcpus of a virtual core share the same pinnings. I think this explains all of the above scenarios.

IMO there are the following possibilities for reporting those pinnings to the user:

1. As today, documenting the output. Not very nice IMO, but the least effort.

2. Print just one line for each virtual cpu/core/socket, like:

   Windows 10 (64-bit) (1)    5    0-1   0-1   -b-   1646.7  0-1 / all

   This has the disadvantage of dropping the per-vcpu time in favor of per-vcore time; OTOH this reflects reality.

3. Print the effective pinnings:

   Windows 10 (64-bit) (1)    5    0     0     -b-   1646.7  0 / all
   Windows 10 (64-bit) (1)    5    1     1     -b-   1646.7  1 / all

   Should be rather easy to do.

Thoughts?

Juergen
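As a rough illustration of the semantics described above (a pin request applies to the whole virtual core via its sibling-0 "master" vcpu, and each sibling then effectively runs on the corresponding sibling of the physical core), here is a small Python sketch. This is a hypothetical model, not Xen code; the function name, the dict-based representation, and the sibling-offset assumption are all illustrative only:

```python
# Hypothetical model of core-scheduling pinning (not Xen's actual code):
# a per-vcore affinity, recorded against sibling 0, is shared by all
# siblings of the virtual core, each mapped to the matching physical sibling.

def effective_pinnings(vcore_affinity, siblings_per_core):
    """Return the per-vcpu pinning implied by a per-vcore affinity.

    vcore_affinity: dict mapping virtual-core index -> set of pcpus
    that the core's "master" vcpu (sibling 0) may run on.
    """
    result = {}
    for vcore, master_pcpus in vcore_affinity.items():
        for i in range(siblings_per_core):
            vcpu = vcore * siblings_per_core + i
            # Sibling i runs on sibling i of the chosen physical core.
            result[vcpu] = sorted(p + i for p in master_pcpus)
    return result

# Two-thread cores; one virtual core whose master is pinned to pcpu 0:
print(effective_pinnings({0: {0}}, 2))  # {0: [0], 1: [1]}
```

With these assumptions the model reproduces the example above: vcpu 0 is effectively pinned to cpu 0 and vcpu 1 to cpu 1, which is exactly what option 3 would print.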