
Re: [Xen-devel] Ongoing/future speculative mitigation work





On Fri, Oct 26, 2018, 1:49 AM Dario Faggioli <dfaggioli@xxxxxxxx> wrote:
> On Thu, 2018-10-25 at 12:35 -0600, Tamas K Lengyel wrote:
> > On Thu, Oct 25, 2018 at 12:13 PM Andrew Cooper
> > <andrew.cooper3@xxxxxxxxxx> wrote:
> > >
> > > TBH, I'd perhaps start with an admin control which lets them switch
> > > between the two modes, and some instructions on how/why they might want
> > > to try switching.
> > >
> > > Trying to second-guess the best HT setting automatically is most likely
> > > going to be a lost cause.  It will be system specific as to whether the
> > > same workload is better with or without HT.
> >
> > This may just not be practically possible in the end, as the system
> > administrator may have no idea what workload will be running on any
> > given system. It may also vary from one user to the next on the same
> > system, without the users being allowed to tune such details of the
> > system. If we can show that, with core-scheduling deployed, performance
> > is improved by x % for most workloads, it may be a safe option.
> >
> I haven't done this kind of benchmark yet, but I'd say that, if every
> vCPU of every domain is doing 100% CPU intensive work, core-scheduling
> isn't going to make much difference, or help you much, as compared to
> regular scheduling with hyperthreading enabled.

Understood. We actually went into this with the assumption that in such cases core-scheduling would underperform plain credit1. The idea was to measure the worst case with plain scheduling and with core-scheduling, so we could clearly see the difference between the two.
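
To make that concrete, the shape of the worst-case microbenchmark I have in mind is roughly the sketch below. It is only an illustration in Python under some assumptions: it has to run somewhere sched_setaffinity really pins to physical CPUs (bare metal, or a dom0 whose vCPUs are pinned 1:1), and the CPU numbers used for the sibling and non-sibling pairs are hypothetical and would have to come from the actual topology.

# Rough sketch (hypothetical CPU numbers): time two CPU-bound workers when
# they share the two threads of one core vs. when they sit on separate cores.
# Real sibling pairs should come from the machine's topology, e.g.
# /sys/devices/system/cpu/cpu*/topology/thread_siblings_list on Linux.
import os
import time
from multiprocessing import Process

def spin(cpu, iterations=20000000):
    """Pin this process to one CPU and burn cycles in a tight integer loop."""
    os.sched_setaffinity(0, {cpu})
    x = 0
    for i in range(iterations):
        x ^= i
    return x

def run_pair(cpus):
    """Run one spinner per CPU in the set and return the elapsed wall time."""
    procs = [Process(target=spin, args=(c,)) for c in cpus]
    start = time.time()
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return time.time() - start

if __name__ == "__main__":
    siblings = (0, 1)   # hypothetical: two SMT threads of the same core
    separate = (0, 2)   # hypothetical: threads on two different cores
    t_smt = run_pair(siblings)
    t_cores = run_pair(separate)
    print("same core: %.2fs, separate cores: %.2fs, slowdown: %.2fx"
          % (t_smt, t_cores, t_smt / t_cores))

The number we care about is that slowdown ratio for a fully CPU-bound workload; a native benchmark would obviously be more representative than pure Python, but the structure of the comparison is the same.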


> Actual numbers may vary depending on whether VMs have an odd or even
> number of vCPUs but, e.g., on hardware with 2 threads per core, and
> using VMs with at least 2 vCPUs each, the _perfect_ implementation of
> core-scheduling would still manage to keep all the *threads* busy,
> which is --as far as our speculations currently go-- what is causing
> the performance degradation you're seeing.
>
> So, again, if it is confirmed that this workload of yours is a
> particularly bad one for SMT, then you are just better off disabling
> hyperthreading. And, no, I don't think such a situation is common
> enough to say "let's disable it for everyone by default".

I wasn't asking to make it the default in Xen, but would it be reasonable to make it the default for our deployment, where such workloads are entirely possible? Again, we don't know the workload and we can't predict it. We were hoping to use core-scheduling eventually, but we did not expect that hyperthreading could cause such drops in performance. If there are tests I can run that represent the "best case" for hyperthreading, I would like to repeat them to see where we stand.
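
Also, when we redo the runs with hyperthreading toggled, I plan to sanity-check from dom0 that SMT really is on or off before trusting the numbers. A trivial sketch, assuming a dom0 with xl in the path, reading threads_per_core from "xl info":

# Minimal sketch, assuming this runs in dom0 with xl available: report the
# threads_per_core value from "xl info" so before/after runs really compare
# SMT on vs. SMT off.
import subprocess

def threads_per_core():
    out = subprocess.check_output(["xl", "info"], universal_newlines=True)
    for line in out.splitlines():
        if line.startswith("threads_per_core"):
            return int(line.split(":", 1)[1].strip())
    raise RuntimeError("threads_per_core not found in xl info output")

if __name__ == "__main__":
    tpc = threads_per_core()
    print("threads_per_core = %d (%s)" % (tpc, "SMT on" if tpc > 1 else "SMT off"))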

Thanks,
Tamas
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

