
Re: [PATCH v4 2/7] x86/xstate: Cross-check dynamic XSTATE sizes at boot


  • To: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Tue, 18 Jun 2024 18:21:28 +0200
  • Cc: Roger Pau Monné <roger.pau@xxxxxxxxxx>, Oleksii Kurochko <oleksii.kurochko@xxxxxxxxx>, Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Tue, 18 Jun 2024 16:21:39 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 18.06.2024 17:21, Andrew Cooper wrote:
> On 18/06/2024 11:05 am, Jan Beulich wrote:
>> On 17.06.2024 19:39, Andrew Cooper wrote:
>>> --- a/xen/arch/x86/xstate.c
>>> +++ b/xen/arch/x86/xstate.c
>>> @@ -604,9 +604,164 @@ static bool valid_xcr0(uint64_t xcr0)
>>>      if ( !(xcr0 & X86_XCR0_BNDREGS) != !(xcr0 & X86_XCR0_BNDCSR) )
>>>          return false;
>>>  
>>> +    /* TILECFG and TILEDATA must be the same. */
>>> +    if ( !(xcr0 & X86_XCR0_TILE_CFG) != !(xcr0 & X86_XCR0_TILE_DATA) )
>>> +        return false;
>>> +
>>>      return true;
>>>  }
>>>  
>>> +struct xcheck_state {
>>> +    uint64_t states;
>>> +    uint32_t uncomp_size;
>>> +    uint32_t comp_size;
>>> +};
>>> +
>>> +static void __init check_new_xstate(struct xcheck_state *s, uint64_t new)
>>> +{
>>> +    uint32_t hw_size;
>>> +
>>> +    BUILD_BUG_ON(X86_XCR0_STATES & X86_XSS_STATES);
>>> +
>>> +    BUG_ON(s->states & new); /* States only increase. */
>>> +    BUG_ON(!valid_xcr0(s->states | new)); /* Xen thinks it's a good value. */
>>> +    BUG_ON(new & ~(X86_XCR0_STATES | X86_XSS_STATES)); /* Known state. */
>>> +    BUG_ON((new & X86_XCR0_STATES) &&
>>> +           (new & X86_XSS_STATES)); /* User or supervisor, not both. */
>>> +
>>> +    s->states |= new;
>>> +    if ( new & X86_XCR0_STATES )
>>> +    {
>>> +        if ( !set_xcr0(s->states & X86_XCR0_STATES) )
>>> +            BUG();
>>> +    }
>>> +    else
>>> +        set_msr_xss(s->states & X86_XSS_STATES);
>>> +
>>> +    /*
>>> +     * Check the uncompressed size.  Some XSTATEs are out-of-order and fill in
>>> +     * prior holes in the state area, so we check that the size doesn't
>>> +     * decrease.
>>> +     */
>>> +    hw_size = cpuid_count_ebx(0xd, 0);
>> Going forward, do we mean to get rid of XSTATE_CPUID? Else imo it should be
>> used here (and again below).
> 
> All documentation about CPUID, from the vendors and from secondary
> sources, is written in terms of numerals, not names.
> 
> XSTATE_CPUID is less meaningful than 0xd, and I would prefer to phase it
> out.

Fair enough; hence why I was asking.

>>> +    if ( hw_size < s->uncomp_size )
>>> +        panic("XSTATE 0x%016"PRIx64", new bits {%63pbl}, uncompressed hw 
>>> size %#x < prev size %#x\n",
>>> +              s->states, &new, hw_size, s->uncomp_size);
>>> +
>>> +    s->uncomp_size = hw_size;
>> Since XSS state doesn't affect uncompressed layout, this looks like largely
>> dead code for that case. Did you consider moving this into the if() above?
> 
> If that were a printk() rather than a panic(), then having the
> assignment in the if() would be wrong.

Would it? For an XSS component the uncompressed size isn't supposed to
change. (Or else doing an == check, as per below, wouldn't be an option.)

> So while it doesn't really matter given the way the logic is currently
> written, it's more code, and interferes with manual debugging to move
> the check into the if().
> 
>> Alternatively, should the comparison use == when dealing with XSS bits?
> 
> Hmm.  We probably can make this check work, given that we ascend through
> user states first, and the supervisor states second.
> 
> Although I'd need to rerun such a change through the entire hardware
> lab.  There have been enough unexpected surprises with "obvious" changes
> already.

Well, we can of course decide to go with what you have for now, and then
see about tightening the check. I fear though that doing so may then be
forgotten ...

>>> +    /*
>>> +     * Check the compressed size, if available.  All components strictly
>>> +     * appear in index order.  In principle there are no holes, but some
>>> +     * components have their base address 64-byte aligned for efficiency
>>> +     * reasons (e.g. AMX-TILE) and there are other components small enough to
>>> +     * fit in the gap (e.g. PKRU) without increasing the overall length.
>>> +     */
>>> +    hw_size = cpuid_count_ebx(0xd, 1);
>>> +
>>> +    if ( cpu_has_xsavec )
>>> +    {
>>> +        if ( hw_size < s->comp_size )
>>> +            panic("XSTATE 0x%016"PRIx64", new bits {%63pbl}, compressed hw 
>>> size %#x < prev size %#x\n",
>>> +                  s->states, &new, hw_size, s->comp_size);
>> Unlike for the uncompressed size, can't it be <= here, since - as the
>> comment says - components appear strictly in index order, and no component
>> has zero size?
> 
> The first version of this patch did have <=, and it really failed on SPR.
> 
> When you activate AMX first, then PKRU next, PKRU really does fit in the
> alignment hole, and the overall compressed size is the same.
> 
> 
> It's a consequence of doing all the user states first, then all the
> supervisor states second.  I did have them strictly in index order to
> begin with, but then hit the enumeration issue on Broadwell and reworked
> xstate_check_sizes() to have a single common cpu_has_xsaves around all
> supervisor states.
> 
> I could potentially undo that, but the consequence is needing a
> cpu_has_xsaves check in every call passing in X86_XSS_*.

Just to say as much: To me, doing things strictly in index order would feel
more "natural" (even if, see below, the other reason that would have made it
seem desirable has evaporated). Other than the need for cpu_has_xsaves, is
there a particular reason you split things as all XCR0 first, then all XSS?
(I'm a little worried anyway that this code will need updating for each new
component addition. Yet "automating" this won't work, because of the need
for the cpu_has_* checks.)
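
Just to illustrate, the strictly index-ordered variant would then look
something like this around the PT / PKRU / PASID area (untested sketch;
the per-call cpu_has_xsaves is the cost you mention):

    if ( cpu_has_avx512f )
        check_new_xstate(&s, X86_XCR0_HI_ZMM | X86_XCR0_ZMM | X86_XCR0_OPMASK);

    if ( cpu_has_xsaves && cpu_has_proc_trace )
        check_new_xstate(&s, X86_XSS_PROC_TRACE);

    if ( cpu_has_pku )
        check_new_xstate(&s, X86_XCR0_PKRU);

    if ( cpu_has_xsaves && boot_cpu_has(X86_FEATURE_ENQCMD) )
        check_new_xstate(&s, X86_XSS_PASID);

    /* ... CET / UINTR / LBR next, then AMX-TILE (17/18) and finally LWP (62). */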

>>> +        s->comp_size = hw_size;
>>> +    }
>>> +    else if ( hw_size ) /* Compressed size reported, but no XSAVEC ? */
>>> +    {
>>> +        static bool once;
>>> +
>>> +        if ( !once )
>>> +        {
>>> +            WARN();
>>> +            once = true;
>>> +        }
>>> +    }
>>> +}
>>> +
>>> +/*
>>> + * The {un,}compressed XSTATE sizes are reported by dynamic CPUID value, based
>>> + * on the current %XCR0 and MSR_XSS values.  The exact layout is also feature
>>> + * and vendor specific.  Cross-check Xen's understanding against real hardware
>>> + * on boot.
>>> + *
>>> + * Testing every combination is prohibitive, so we use a partial approach.
>>> + * Starting with nothing active, we add new XSTATEs and check that the CPUID
>>> + * dynamic values never decrease.
>>> + */
>>> +static void __init noinline xstate_check_sizes(void)
>>> +{
>>> +    uint64_t old_xcr0 = get_xcr0();
>>> +    uint64_t old_xss = get_msr_xss();
>>> +    struct xcheck_state s = {};
>>> +
>>> +    /*
>>> +     * User XSTATEs, increasing by index.
>>> +     *
>>> +     * Chronologically, Intel and AMD had identical layouts for AVX (YMM).
>>> + * AMD introduced LWP in Fam15h, following immediately on from YMM.  Intel
>>> +     * left an LWP-shaped hole when adding MPX (BND{CSR,REGS}) in Skylake.
>>> +     * AMD removed LWP in Fam17h, putting PKRU in the same space, breaking
>>> +     * layout compatibility with Intel and having a knock-on effect on all
>>> +     * subsequent states.
>>> +     */
>>> +    check_new_xstate(&s, X86_XCR0_SSE | X86_XCR0_FP);
>>> +
>>> +    if ( cpu_has_avx )
>>> +        check_new_xstate(&s, X86_XCR0_YMM);
>>> +
>>> +    if ( cpu_has_mpx )
>>> +        check_new_xstate(&s, X86_XCR0_BNDCSR | X86_XCR0_BNDREGS);
>>> +
>>> +    if ( cpu_has_avx512f )
>>> +        check_new_xstate(&s, X86_XCR0_HI_ZMM | X86_XCR0_ZMM | X86_XCR0_OPMASK);
>>> +
>>> +    if ( cpu_has_pku )
>>> +        check_new_xstate(&s, X86_XCR0_PKRU);
>>> +
>>> +    if ( boot_cpu_has(X86_FEATURE_AMX_TILE) )
>>> +        check_new_xstate(&s, X86_XCR0_TILE_DATA | X86_XCR0_TILE_CFG);
>>> +
>>> +    if ( boot_cpu_has(X86_FEATURE_LWP) )
>>> +        check_new_xstate(&s, X86_XCR0_LWP);
>>> +
>>> +    /*
>>> +     * Supervisor XSTATEs, increasing by index.
>>> +     *
>>> +     * Intel Broadwell has Processor Trace but no XSAVES.  There doesn't
>>> +     * appear to have been a new enumeration when X86_XSS_PROC_TRACE was
>>> +     * introduced in Skylake.
>>> +     */
>>> +    if ( cpu_has_xsaves )
>>> +    {
>>> +        if ( cpu_has_proc_trace )
>>> +            check_new_xstate(&s, X86_XSS_PROC_TRACE);
>>> +
>>> +        if ( boot_cpu_has(X86_FEATURE_ENQCMD) )
>>> +            check_new_xstate(&s, X86_XSS_PASID);
>>> +
>>> +        if ( boot_cpu_has(X86_FEATURE_CET_SS) ||
>>> +             boot_cpu_has(X86_FEATURE_CET_IBT) )
>>> +        {
>>> +            check_new_xstate(&s, X86_XSS_CET_U);
>>> +            check_new_xstate(&s, X86_XSS_CET_S);
>>> +        }
>>> +
>>> +        if ( boot_cpu_has(X86_FEATURE_UINTR) )
>>> +            check_new_xstate(&s, X86_XSS_UINTR);
>>> +
>>> +        if ( boot_cpu_has(X86_FEATURE_ARCH_LBR) )
>>> +            check_new_xstate(&s, X86_XSS_LBR);
>>> +    }
>> In principle compressed state checking could be extended to also verify
>> the offsets are strictly increasing. That, however, would require
>> interleaving XCR0 and XSS checks, strictly by index. Did you consider (and
>> then discard) doing so?
> 
> What offsets are you referring to?
> 
> Compressed images have no offset information.  Every "row" which has
> ecx.xss set has ebx (offset) reported as 0.  The offset information for
> the user rows are only applicable for uncompressed images.

Hmm, right, nothing to compare our calculations against. And for the
compressed form the (calculated) offsets aren't any different from the
accumulated sizes of the preceding components.
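
(Just for completeness, the calculation I was thinking of, as a rough sketch
only - each enabled component's compressed offset is simply the running total,
rounded up to 64 bytes when CPUID.0xd[i].ECX bit 1 requests alignment:)

    unsigned int offset = XSTATE_AREA_MIN_SIZE; /* 512-byte legacy area + 64-byte header. */
    unsigned int i;

    for ( i = 2; i < 63; i++ )
    {
        unsigned int eax, ebx, ecx, edx;

        if ( !(s.states & (1ULL << i)) )
            continue;

        cpuid_count(0xd, i, &eax, &ebx, &ecx, &edx);

        if ( ecx & 2 ) /* Component requests 64-byte alignment. */
            offset = ROUNDUP(offset, 64);

        /* 'offset' is where component i starts in the compressed image. */
        offset += eax; /* EAX = size of component i. */
    }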

Jan



 

