Re: [Xen-devel] Fwd: [tip:x86/urgent] x86, sched: Allow topologies where NUMA nodes share an LLC
On 4/17/18 6:02 PM, Dario Faggioli wrote:
> On Tue, 2018-04-17 at 15:08 +0100, Andrew Cooper wrote:
>> On 17/04/18 15:02, Juergen Gross wrote:
>>> Is this something we should be aware of in Xen, too?
>> If we had something close to a working topology representation,
>> probably.
> True, as far as letting the guest know about these details (in
> appropriate circumstances) goes. About using it _within_ Xen, well:
>
> "Intel's Skylake Server CPUs have a different LLC topology than
> previous generations. When in Sub-NUMA-Clustering (SNC) mode, the
> package is divided into two "slices", each containing half the cores,
> half the LLC, and one memory controller and each slice is enumerated
> to Linux as a NUMA node. This is similar to how the cores and LLC
> were arranged for the Cluster-On-Die (CoD) feature."
>
> This looks similar (but not identical) to, e.g., AMD EPYC. If Xen
> sees each slice as a NUMA node as well, we are already doing
> something (although, of course, we can always improve). If not, I see
> ways of taking advantage of the information that not all the cores in
> the socket equally share the LLC, e.g., in Credit2, by providing a
> per-LLC runqueue option and/or by taking that into account in the
> 'migration resistance' mechanisms (there is already work in that
> direction).

Yeah, a per-LLC runqueue option sounds like a good option. We can look
into that.

Yes, I am working on dynamically defining migration resistance as a
function of cache and CPU topology, i.e. instead of having a constant
value of migration resistance, it will take into account the placement
of CPUs and caches and the sharing of caches among CPUs.

> Regards,
> Dario

Anshul
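
To illustrate the per-LLC runqueue idea, here is a minimal,
self-contained sketch; it is not actual Xen/Credit2 code, and the
NR_CPUS value, llc_id[] table, and all names are invented example
data. CPUs are grouped into runqueues keyed by the LLC they share, so
each SNC slice ends up with its own runqueue.

/* Hypothetical sketch (not Xen code): build one runqueue per LLC. */
#include <stdio.h>

#define NR_CPUS       8
#define MAX_RUNQUEUES NR_CPUS

/* Example topology: CPUs 0-3 share LLC 0 and CPUs 4-7 share LLC 1,
 * as the two SNC slices of one socket would be enumerated. */
static const int llc_id[NR_CPUS] = { 0, 0, 0, 0, 1, 1, 1, 1 };

struct runqueue {
    int llc;            /* LLC this runqueue covers */
    int cpus[NR_CPUS];  /* CPUs assigned to it      */
    int nr_cpus;
};

static struct runqueue runqueues[MAX_RUNQUEUES];
static int nr_runqueues;

/* Find (or create) the runqueue covering the given CPU's LLC. */
static struct runqueue *runqueue_for_cpu(int cpu)
{
    int i;

    for (i = 0; i < nr_runqueues; i++)
        if (runqueues[i].llc == llc_id[cpu])
            return &runqueues[i];

    runqueues[nr_runqueues].llc = llc_id[cpu];
    return &runqueues[nr_runqueues++];
}

int main(void)
{
    int cpu, i, j;

    for (cpu = 0; cpu < NR_CPUS; cpu++) {
        struct runqueue *rq = runqueue_for_cpu(cpu);
        rq->cpus[rq->nr_cpus++] = cpu;
    }

    for (i = 0; i < nr_runqueues; i++) {
        printf("runqueue %d (LLC %d): CPUs", i, runqueues[i].llc);
        for (j = 0; j < runqueues[i].nr_cpus; j++)
            printf(" %d", runqueues[i].cpus[j]);
        printf("\n");
    }
    return 0;
}

With the example table above, the sketch prints two runqueues, one
per slice, which is the grouping a per-LLC runqueue option would
produce on an SNC-enabled socket.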
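
Along the same lines, a rough sketch of topology-derived migration
resistance (again, not the actual work in progress; the cost scale and
topology tables are invented for the example): the cost charged for
moving a vCPU grows as the source and destination CPUs share less of
the cache hierarchy, instead of being a single constant.

/* Hypothetical sketch: migration resistance as a function of how much
 * of the cache/CPU topology two CPUs share. */
#include <stdio.h>

#define NR_CPUS 8

/* Example enumeration of one SNC socket: two slices, each with its own
 * LLC; pairs of CPUs are SMT siblings of the same core. */
static const int core_id[NR_CPUS]   = { 0, 0, 1, 1, 2, 2, 3, 3 };
static const int llc_id[NR_CPUS]    = { 0, 0, 0, 0, 1, 1, 1, 1 };
static const int socket_id[NR_CPUS] = { 0, 0, 0, 0, 0, 0, 0, 0 };

/* Invented cost units, roughly "how much cache warmth is lost". */
#define RESIST_SAME_CORE     500  /* SMT sibling: caches stay warm    */
#define RESIST_SAME_LLC     2000  /* same SNC slice: LLC still shared */
#define RESIST_SAME_SOCKET  4000  /* other slice of the same package  */
#define RESIST_REMOTE       8000  /* different socket: coldest case   */

static unsigned int migration_resistance(int from, int to)
{
    if (core_id[from] == core_id[to])
        return RESIST_SAME_CORE;
    if (llc_id[from] == llc_id[to])
        return RESIST_SAME_LLC;
    if (socket_id[from] == socket_id[to])
        return RESIST_SAME_SOCKET;
    return RESIST_REMOTE;
}

int main(void)
{
    /* Same core, same LLC, and cross-slice examples. */
    printf("0 -> 1: %u\n", migration_resistance(0, 1));
    printf("0 -> 2: %u\n", migration_resistance(0, 2));
    printf("0 -> 4: %u\n", migration_resistance(0, 4));
    return 0;
}

The scheduler would then compare this per-pair cost, rather than a
fixed constant, against the expected benefit before migrating a vCPU.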