
Re: [RFC PATCH v2 06/34] x86/msr: Use the alternatives mechanism to read PMC



On 4/22/2025 1:38 AM, Jürgen Groß wrote:
> On 22.04.25 10:21, Xin Li (Intel) wrote:
>> To eliminate the indirect call overhead introduced by the pv_ops API,
>> use the alternatives mechanism to read PMC:

> Which indirect call overhead? The indirect call is patched via the
> alternative mechanism to a direct one.


See below.



>>      1) When built with !CONFIG_XEN_PV, X86_FEATURE_XENPV becomes a
>>         disabled feature, preventing the Xen PMC read code from being
>>         built and ensuring the native code is executed unconditionally.

> Without CONFIG_XEN_PV, CONFIG_PARAVIRT_XXL is not selected, resulting in
> native code anyway.

Yes, that behavior is kept in this patch, but in a slightly different way.



>>      2) When built with CONFIG_XEN_PV:
>>
>>         2.1) If not running on the Xen hypervisor (!X86_FEATURE_XENPV),
>>              the kernel runtime binary is patched to unconditionally
>>              jump to the native PMC read code.
>>
>>         2.2) If running on the Xen hypervisor (X86_FEATURE_XENPV), the
>>              kernel runtime binary is patched to unconditionally jump
>>              to the Xen PMC read code.
>>
>> Consequently, remove the pv_ops PMC read API.

> I don't see the value of this patch.
>
> It adds more #ifdef and code lines without any real gain.
>
> In case the x86 maintainers think it is still worth it, I won't object.

I think we want to totally bypass pv_ops in case 2.1).

Do you mean the indirect call is patched to call native code *directly*
for 2.1?  I wasn't aware of that; could you please elaborate?

AFAIK, Xen PV has been the sole user of pv_ops for nearly 20 years.  That
raises serious doubts about whether pv_ops still gives Linux the value of
a well-abstracted "CPU" or "platform" layer.  And the x86 maintainers
have said it is a maintenance nightmare.

Thanks!
    Xin



 

