
Re: [Xen-devel] [RFC PATCH v1 00/16] xen: sched: implement core-scheduling



On Wed, 2018-10-17 at 15:36 -0600, Tamas K Lengyel wrote:
> On Fri, Aug 24, 2018 at 5:36 PM Dario Faggioli <dfaggioli@xxxxxxxx> wrote:
> > 
> > They give me a system that boots, where I can do basic stuff (like
> > playing with dom0, creating guests, etc.), and where the constraint
> > of only scheduling vcpus from one domain at a time on pcpus that
> > are part of the same core is, as far as I've seen, respected.
> > 
> > There are still cases where the behavior is not ideal, e.g., we
> > could make better use of some of the cores, which are sometimes
> > left idle.
> > 
> > There are git branches here:
> >  https://gitlab.com/dfaggioli/xen.git rel/sched/core-scheduling-RFCv1
> >  https://github.com/fdario/xen.git rel/sched/core-scheduling-RFCv1
> > 
> > Any comment is more than welcome.
> 
> Hi Dario,
>
Hi,

> thanks for the series; we are in the process of evaluating it in
> terms of performance. Our test is to set up 2 VMs, each assigned
> enough vCPUs to completely saturate all hyperthreads, and then run a
> CPU benchmark inside the VMs that spins each vCPU at 100% (using
> swet). The idea is to force the scheduler to move vCPUs in and out
> constantly, to see how much of a performance hit there is with
> core-scheduling vs. plain credit1 vs. disabling hyperthreading.
> After running the test on a handful of machines, it looks like we
> get the best performance with hyperthreading completely disabled,
> which is a bit unexpected. Have you or anyone else encountered this?
>
Do you mean that no-hyperthreading is better than core-scheduling as
implemented by this series?

Or do you mean that no-hyperthreading is better than plain Credit1,
with SMT enabled and *without* this series?

If the former, well, this series is not at a stage where it makes sense
to run performance benchmarks. Not even close, I would say.

If the latter, that is a bit odd: I've often seen hyperthreading
produce seemingly strange performance figures, but it is rare for it
to actually slow things down (although I wouldn't rule it out! :-P).

Of course, this could also be a problem in the scheduler. Something I'd
do is (leaving this series aside; see the example commands below):
- try to run the benchmark with half the number of vCPUs wrt the total
number of pCPUs (i.e., with only 1 vCPU per core);
- try to run the benchmark as above, but explicitly pinning 1 vCPU on
each core;
- go back to nr vCPUs == nr pCPUs, but explicitly pin each vCPU to one
pCPU (i.e., to one thread, in this case).
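For instance, just as a sketch: assuming a host with 4 cores / 8
threads, enumerated so that pCPUs 2N and 2N+1 are the two threads of
core N (check the actual topology with `xl info -n'), and a guest
called vm1 (a placeholder name) with 4 vCPUs, the pinning could look
like this:

  # Pin vCPU n of vm1 to the first thread of core n, so that
  # each vCPU effectively has a full core to itself:
  xl vcpu-pin vm1 0 0
  xl vcpu-pin vm1 1 2
  xl vcpu-pin vm1 2 4
  xl vcpu-pin vm1 3 6

  # Check the resulting affinities:
  xl vcpu-list vm1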

You can do all of the above with Credit, and the results should already
tell us whether we are dealing with some scheduling anomaly. If you
want, since you don't seem to have oversubscription, you can also try
the null scheduler, to have even more data points (in fact, one of the
purposes of having it was exactly this, i.e., making it possible to
have a reference for other schedulers to compare against); see the
sketch below for how to select it.
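In case it is useful, a minimal sketch of how to select it (assuming a
Xen version recent enough to include the null scheduler):

  # On Xen's boot command line (e.g., in the grub entry),
  # select the null scheduler for the whole host:
  ... sched=null ...

  # After boot, verify which scheduler is active:
  xl info | grep xen_scheduler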

BTW, when you say "2 VMs with enough vCPUs to saturate all
hyperthreads", does that include dom0 vCPUs?
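FWIW, for this kind of measurement it usually helps to also limit and
pin dom0's vCPUs, so that they don't perturb the results. A sketch,
using Xen boot command line parameters (the values are just examples):

  # Give dom0 4 vCPUs and pin them 1:1 to the first 4 pCPUs:
  ... dom0_max_vcpus=4 dom0_vcpus_pin ...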

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Software Engineer @ SUSE https://www.suse.com/

