
Re: [Xen-devel] [PATCH 3/3] xen: Remove buggy initial placement algorithm



On 15/07/16 19:02, George Dunlap wrote:
> The initial placement algorithm sometimes picks cpus outside of the
> mask it's given, does a lot of unnecessary bitmasking, does its own
> separate load calculation, and completely ignores vcpu hard and soft
> affinities.  Just get rid of it and rely on the schedulers to do
> initial placement.
>
> Signed-off-by: George Dunlap <george.dunlap@xxxxxxxxxx>
> ---
> Since many of the schedulers' cpu_pick functions have a strong preference to
> just leave the cpu where it is (in particular, credit1 and rt), this
> may cause some cpus to be overloaded when creating a lot of domains.
> Arguably this should be fixed in the schedulers themselves.
>
> The core problem with default_vcpu0_location() is that it chooses its
> initial cpu based on the sibling of pcpu 0, not the first available
> sibling in the online mask; so if pcpu 1 ends up being less "busy"
> than all the cpus in the pool, then it ends up being chosen even
> though it's not in the pool.
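
(For reference, the seeding in question is roughly of this shape.
This is a from-memory sketch rather than a verbatim copy of the code
being removed; the cpu_exclude_map variable and the
per_cpu(cpu_sibling_mask, ...) accessor are assumed here.)

    /* Seed from pcpu 0's sibling mask, never consulting 'online'. */
    cpumask_copy(&cpu_exclude_map, per_cpu(cpu_sibling_mask, 0));
    cpu = cpumask_first(&cpu_exclude_map);
    i = cpumask_next(cpu, &cpu_exclude_map);
    if ( i < nr_cpu_ids )
        cpu = i;   /* may be a sibling of pcpu 0 outside the pool */

The later loop over 'online' only replaces that seed with a cpu it
judges at least as lightly loaded, so an idle out-of-pool sibling of
pcpu 0 can survive to the end, which is the failure described above.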
>
> Fixing the algorithm would involve starting with the sibling map of
> cpumask_first(online) rather than 0, and then having all sibling
> checks not only test that the result of cpumask_next() < nr_cpu_ids,
> but that the result is in online.
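
(Along those lines, the corrected seeding would look something like
the sketch below, under the same assumptions as above; it is only
meant to illustrate the two changes just described.)

    /* Seed from the first cpu actually in the pool... */
    cpu = cpumask_first(online);
    cpumask_copy(&cpu_exclude_map, per_cpu(cpu_sibling_mask, cpu));
    /* ...and only prefer a sibling that is in the pool as well. */
    i = cpumask_next(cpu, &cpu_exclude_map);
    if ( i < nr_cpu_ids && cpumask_test_cpu(i, online) )
        cpu = i;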
>
> Additionally, as far as I can tell, the cpumask_test_cpu(i,
> &cpu_exclude_map) at the top of the for_each_cpu() loop can never
> return false; so both this test and the cpumask_or() are
> unnecessary and should be removed.
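
(For context, the test and the cpumask_or() referred to sit in the
candidate loop, whose shape is roughly the following; again a
from-memory sketch, not a verbatim quote of the removed code.)

    for_each_cpu ( i, online )
    {
        if ( cpumask_test_cpu(i, &cpu_exclude_map) )
            continue;
        cpumask_or(&cpu_exclude_map, &cpu_exclude_map,
                   per_cpu(cpu_sibling_mask, i));
        /* (primary-sibling check and load comparison elided) */
    }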

Presumably the overloaded pcpu will quickly become less loaded as
work-stealing starts to happen?

As for default_vcpu0_location(), getting rid of it definitely looks like
a good move.

~Andrew

