
Re: [Xen-devel] [PATCH] xen: sched_credit: filter node-affinity mask against online cpus



On 16/09/13 16:23, Dario Faggioli wrote:
On Mon, 2013-09-16 at 15:00 +0100, George Dunlap wrote:
On 13/09/13 17:09, Dario Faggioli wrote:

diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
index dbe6de6..d76cd88 100644
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -627,10 +638,18 @@ _csched_cpu_pick(const struct scheduler *ops, struct vcpu *vc, bool_t commit)
     int balance_step;
     online = cpupool_scheduler_cpumask(vc->domain->cpupool);
+    cpumask_and(&cpus, vc->cpu_affinity, online);
+
       for_each_csched_balance_step( balance_step )
       {
+        /*
+         * We filter the node-affinity mask against
+         * [vc->cpu_affinity && online] here to avoid problems if all
+         * the cpus in the node-affinity mask are offline (e.g.,
+         * because the domain moved to a different cpupool).
+         */
It's probably worth mentioning that the reason you do the cpumask_and
here, and not in the other place this is called, is that cpumask_cycle is
called after this one (and it will choke on an empty mask), but not in the
other place it's called.

Well, it is indeed what the comment says, or at least what I wanted it to
say. :-)

I tried to concentrate on the actual issue, which is avoiding picking an
offline cpu, independently of which specific ASSERT is the one that
triggers.

Ok, but the question is, why AND it with online cpus here, but not in the other place? If the "actual issue" is that the mask may be empty, it should be done in both places. If the "actual issue" is that cpumask_cycle() chokes on empty masks, then that's an important thing to point out, so poor programmers don't get confused as to why it's different. :-)
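
To make the point concrete, here is a minimal sketch (not the actual patch) of the pattern under discussion. pick_cpu_sketch() is a hypothetical helper invented for illustration; cpumask_and(), cpumask_empty() and cpumask_cycle() are Xen's existing cpumask primitives, and cpumask_cycle() is the call that must never be handed an empty mask:

/* Hypothetical helper, for illustration only -- not part of the patch. */
static int pick_cpu_sketch(const struct vcpu *vc, const cpumask_t *online,
                           cpumask_t *scratch)
{
    /* Restrict the vcpu's hard affinity to cpus that are actually online... */
    cpumask_and(scratch, vc->cpu_affinity, online);

    /* ...and bail out before cpumask_cycle() can ever see an empty mask. */
    if ( cpumask_empty(scratch) )
        return -1;

    /* Safe now: the mask is guaranteed non-empty. */
    return cpumask_cycle(vc->processor, scratch);
}

Whether that emptiness guarantee should be established at one call site or at both is exactly the question above.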

 -George
