
Re: [PATCH linux-next v2] x86/xen/time: prefer tsc as clocksource when it is invariant


  • To: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
  • From: Krister Johansen <kjlx@xxxxxxxxxxxxxxxxxx>
  • Date: Wed, 14 Dec 2022 10:01:47 -0800
  • Cc: Krister Johansen <kjlx@xxxxxxxxxxxxxxxxxx>, Juergen Gross <jgross@xxxxxxxx>, Thomas Gleixner <tglx@xxxxxxxxxxxxx>, Ingo Molnar <mingo@xxxxxxxxxx>, Borislav Petkov <bp@xxxxxxxxx>, Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>, x86@xxxxxxxxxx, "H. Peter Anvin" <hpa@xxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx, linux-kernel@xxxxxxxxxxxxxxx, Marcelo Tosatti <mtosatti@xxxxxxxxxx>, Anthony Liguori <aliguori@xxxxxxxxxx>, David Reaver <me@xxxxxxxxxxxxxxx>, Brendan Gregg <brendan@xxxxxxxxx>
  • Delivery-date: Wed, 14 Dec 2022 18:02:11 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Tue, Dec 13, 2022 at 04:25:32PM -0500, Boris Ostrovsky wrote:
> 
> On 12/12/22 5:09 PM, Krister Johansen wrote:
> > On Mon, Dec 12, 2022 at 01:48:24PM -0500, Boris Ostrovsky wrote:
> > > On 12/12/22 11:05 AM, Krister Johansen wrote:
> > > > diff --git a/arch/x86/include/asm/xen/cpuid.h 
> > > > b/arch/x86/include/asm/xen/cpuid.h
> > > > index 6daa9b0c8d11..d9d7432481e9 100644
> > > > --- a/arch/x86/include/asm/xen/cpuid.h
> > > > +++ b/arch/x86/include/asm/xen/cpuid.h
> > > > @@ -88,6 +88,12 @@
> > > >     *             EDX: shift amount for tsc->ns conversion
> > > >     * Sub-leaf 2: EAX: host tsc frequency in kHz
> > > >     */
> > > > +#define XEN_CPUID_TSC_EMULATED       (1u << 0)
> > > > +#define XEN_CPUID_HOST_TSC_RELIABLE  (1u << 1)
> > > > +#define XEN_CPUID_RDTSCP_INSTR_AVAIL (1u << 2)
> > > > +#define XEN_CPUID_TSC_MODE_DEFAULT   (0)
> > > > +#define XEN_CPUID_TSC_MODE_EMULATE   (1u)
> > > > +#define XEN_CPUID_TSC_MODE_NOEMULATE (2u)
> > > This file is a copy of Xen public interface so this change should go to 
> > > Xen first.
> > Ok, should I split this into a separate patch on the linux side too?
> 
> Yes. Once the Xen patch has been accepted you will either submit the same 
> patch for Linux or sync the Linux file with Xen (if there are more differences).

Thanks.  Based upon the feedback I received from you and Jan, I may try
to shrink the check in xen_tsc_safe_clocksource() down a bit.  In that
case, I may only need to refer to a single field in the leaf that
provides this information.  If so, are you all right with dropping
the change to the header and referring to the value directly, or would
you prefer that I proceed with adding these definitions to the public API?
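For illustration, the shrunken check I have in mind would consult only the tsc_mode field. This is just a hedged sketch, written as a plain user-space function so the logic can be exercised outside the kernel; the constant name and the assumption that tsc_mode lives in EBX of leaf base+3 come from the discussion above, not from a finished patch:

```c
#include <stdint.h>

/* Assumed value from Xen's public tsc_mode interface; the real patch
 * would use whatever constant the public header ends up defining. */
#define TSC_MODE_NOEMULATE 2u

/* Sketch: decide whether the TSC can safely be preferred, given the
 * EBX value (guest tsc_mode) of Xen CPUID leaf base+3.  Only the
 * single tsc_mode field is consulted, per the idea of shrinking the
 * check down to one test. */
static int tsc_mode_is_safe(uint32_t tsc_mode_ebx)
{
	/* Anything other than "never emulate" could start being
	 * emulated later (e.g. after a migration), so treat every
	 * other mode as unsafe. */
	return tsc_mode_ebx == TSC_MODE_NOEMULATE;
}
```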

> > > > +static int __init xen_tsc_safe_clocksource(void)
> > > > +{
> > > > +       u32 eax, ebx, ecx, edx;
> > > > +
> > > > +       if (!(xen_hvm_domain() || xen_pvh_domain()))
> > > > +               return 0;
> > > > +
> > > > +       if (!(boot_cpu_has(X86_FEATURE_CONSTANT_TSC)))
> > > > +               return 0;
> > > > +
> > > > +       if (!(boot_cpu_has(X86_FEATURE_NONSTOP_TSC)))
> > > > +               return 0;
> > > > +
> > > > +       if (check_tsc_unstable())
> > > > +               return 0;
> > > > +
> > > > +       cpuid(xen_cpuid_base() + 3, &eax, &ebx, &ecx, &edx);
> > > > +
> > > > +       if (eax & XEN_CPUID_TSC_EMULATED)
> > > > +               return 0;
> > > > +
> > > > +       if (ebx != XEN_CPUID_TSC_MODE_NOEMULATE)
> > > > +               return 0;
> > > Why is the last test needed?
> > I was under the impression that if the mode was 0 (default), it would be
> > possible for the tsc to become emulated in the future, perhaps after a
> > migration.  The presence of tsc_mode noemulate meant that we could
> > count on the falseness of the XEN_CPUID_TSC_EMULATED check remaining
> > constant.
> 
> This will filter out most modern processors with TSC scaling support, where in 
> default mode we don't intercept RDTSC after migration. But I don't think we 
> have a proper interface to determine this, so we don't have much choice but to 
> indeed make this check.

Yes, if this remains a single boot-time check, I'm not sure that knowing
whether the processor supports tsc scaling helps us.  If tsc_mode is
default, there's always a possibility of the tsc becoming emulated later
on as part of migration, correct?

The other thing that might be possible here is to add a background
timer that periodically checks whether the tsc is still unemulated and,
if it suddenly becomes emulated, lowers the tsc rating again so that
the xen clocksource is preferred.  I had written this off initially as
an impractical solution, since it required a lot more mechanism and
meant the performance characteristics of the system could change
without user intervention.  However, if this seems like a good idea,
I'm not opposed to giving it a try.
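Roughly, the decision the timer callback would make is something like the following. This is only a sketch with hypothetical rating values and no real clocksource/timer API calls, again written as a plain function so the logic stands on its own:

```c
/* Hypothetical ratings mirroring the kernel's clocksource scheme,
 * where a higher rating is preferred by the clocksource core. */
#define RATING_TSC_PREFERRED 300
#define RATING_TSC_DEMOTED   299	/* below the xen clocksource */

/* Sketch of the body a periodic timer callback might run: if the
 * hypervisor has started emulating the TSC (e.g. after a migration),
 * demote the tsc rating so the xen clocksource wins the comparison
 * again; otherwise leave the rating alone. */
static int recheck_tsc_rating(int tsc_emulated, int current_rating)
{
	if (tsc_emulated && current_rating == RATING_TSC_PREFERRED)
		return RATING_TSC_DEMOTED;
	return current_rating;
}
```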

-K
