Re: [Xen-devel] [PATCH v2 10/16] xen: sched: use soft-affinity instead of domain's node-affinity
On 13/11/13 19:12, Dario Faggioli wrote:

Now that we have it, use soft affinity for scheduling, and replace the indirect use of the domain's NUMA node-affinity. This is more general, as soft affinity does not have to be related to NUMA. At the same time, it makes it possible to achieve the same results as NUMA-aware scheduling, just by setting the soft affinity of all the domain's vCPUs equal to the domain's node-affinity (e.g., from the toolstack).

This also means renaming most of the NUMA-aware scheduling functions in credit1 to something more generic, hinting at the concept of soft affinity rather than directly at NUMA awareness.

As a side effect, this simplifies the code quite a bit. In fact, prior to this change, we needed to cache the translation of d->node_affinity (which is a nodemask_t) into a cpumask_t, since that is what scheduling decisions require (we used to keep it in node_affinity_cpumask). This, and all the complicated logic required to keep it updated, is no longer necessary.

The high-level description of NUMA placement and scheduling in docs/misc/xl-numa-placement.markdown is being updated too, to match the new architecture.

Signed-off-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>

Reviewed-by: George Dunlap <george.dunlap@xxxxxxxxxxxxx>

Just a few things to note below...
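For illustration, here is a minimal standalone sketch, in plain C, of the nodemask-to-cpumask translation that node_affinity_cpumask used to cache. The types and the topology table are made up for this example (Xen uses nodemask_t/cpumask_t and its own NUMA topology data); the point is that, with per-vCPU soft affinity, this expansion can be done once, outside the scheduler, and the result simply set as each vCPU's soft affinity:

    /* Standalone sketch, NOT actual Xen code: expand a mask of NUMA
     * nodes into the set of CPUs belonging to those nodes -- the
     * translation the old node_affinity_cpumask field cached. */
    #include <stdint.h>
    #include <stdio.h>

    #define NR_NODES 4

    /* Hypothetical topology: the CPUs of each node, as a 64-bit mask. */
    static const uint64_t node_to_cpus[NR_NODES] = {
        0x000000000000ffffULL,   /* node 0: CPUs  0-15 */
        0x00000000ffff0000ULL,   /* node 1: CPUs 16-31 */
        0x0000ffff00000000ULL,   /* node 2: CPUs 32-47 */
        0xffff000000000000ULL,   /* node 3: CPUs 48-63 */
    };

    /* What the scheduler used to recompute and cache on every
     * node-affinity change; after the patch, the toolstack can do this
     * once and set the result as each vCPU's soft affinity. */
    static uint64_t nodemask_to_cpumask(uint32_t nodemask)
    {
        uint64_t cpus = 0;

        for ( unsigned int node = 0; node < NR_NODES; node++ )
            if ( nodemask & (1u << node) )
                cpus |= node_to_cpus[node];

        return cpus;
    }

    int main(void)
    {
        /* A domain with node-affinity to nodes 0 and 2. */
        printf("soft affinity = %#llx\n",
               (unsigned long long)nodemask_to_cpumask(0x5));
        return 0;
    }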
At this point, the only thing inside the spinlock is contingent on d->auto_node_affinity.

diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
index 398b095..0790ebb 100644
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
...

At this point we've lost a way to make this check potentially much faster (being able to check auto_node_affinity). This isn't a super-hot path, but it does happen fairly frequently -- will the cpumask_full() check take a significant amount of time on, say, a 4096-core system? If so, we might think about "caching" the result of cpumask_full() at some point.
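For a sense of scale, here is a rough standalone sketch (made-up names, not Xen's actual cpumask API). With NR_CPUS = 4096, a cpumask_full() scan only touches 4096/64 = 64 machine words, so caching its result would mostly save that short loop, at the cost of keeping the cached flag coherent on every write to the mask:

    /* Standalone sketch, NOT actual Xen code: a word-by-word
     * cpumask_full() and a hypothetical cached variant of it. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define NR_CPUS 4096
    #define MASK_WORDS (NR_CPUS / 64)

    struct cpumask {
        uint64_t bits[MASK_WORDS];
    };

    struct vcpu_affinity {
        struct cpumask soft;
        bool soft_is_full;   /* hypothetical cached cpumask_full() result */
    };

    /* For 4096 CPUs this scans at most 64 words. */
    static bool cpumask_full(const struct cpumask *m)
    {
        for ( unsigned int i = 0; i < MASK_WORDS; i++ )
            if ( m->bits[i] != ~0ULL )
                return false;
        return true;
    }

    /* The cached flag stays correct only if every writer of the mask
     * goes through a helper like this one. */
    static void set_soft_affinity(struct vcpu_affinity *v,
                                  const struct cpumask *m)
    {
        memcpy(&v->soft, m, sizeof(v->soft));
        v->soft_is_full = cpumask_full(&v->soft);
    }

    int main(void)
    {
        struct cpumask all;
        struct vcpu_affinity v;

        memset(all.bits, 0xff, sizeof(all.bits));
        set_soft_affinity(&v, &all);
        printf("full? %d\n", v.soft_is_full);   /* prints 1 */
        return 0;
    }

The real cost of such a cache is the bookkeeping rather than the memory: it only pays off if affinity masks are read much more often than they are written, which a profile on a large system would have to confirm.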