
Re: [Xen-devel] [PATCH 03/27] x86/cpuid: Introduce struct cpuid_policy



On 04/01/17 14:22, Jan Beulich wrote:
>>>> On 04.01.17 at 13:39, <andrew.cooper3@xxxxxxxxxx> wrote:
>> struct cpuid_policy will eventually be a complete replacement for the cpuids[]
>> array, with a fixed layout and named fields to allow O(1) access to specific
>> information.
>>
>> For now, the CPUID content is capped at the 0xd and 0x8000001c leaves, which
>> matches the maximum policy that the toolstack will generate for a domain.
> Especially (but not only) leaf 0x17 and extended leaf 0x8000001e
> make me wonder whether this is a good starting point.

The starting point matches what the toolstack currently does.

I'd prefer to logically separate this series (reworking how the
hypervisor deals with CPUID data) from altering the default policy
given to guests, but I do agree that we should move in the direction
you suggest.

>
>> @@ -67,6 +80,55 @@ static void __init sanitise_featureset(uint32_t *fs)
>>                            (fs[FEATURESET_e1d] & ~CPUID_COMMON_1D_FEATURES));
>>  }
>>  
>> +static void __init calculate_raw_policy(void)
>> +{
>> +    struct cpuid_policy *p = &raw_policy;
>> +    unsigned int i;
>> +
>> +    cpuid_leaf(0, &p->basic.raw[0]);
>> +    for ( i = 1; i < min(ARRAY_SIZE(p->basic.raw),
>> +                         p->basic.max_leaf + 1ul); ++i )
>> +    {
>> +        /* Collected later. */
>> +        if ( i == 0x7 || i == 0xd )
>> +            continue;
>> +
>> +        cpuid_leaf(i, &p->basic.raw[i]);
> Leaves 2, 4, 0xb, and 0xf are, iirc, a multiple invocation ones too.
> There should at least be a comment here clarifying why they don't
> need treatment similar to 7 and 0xd.

Leaf 2 is magic.  It doesn't take a subleaf parameter, but may return
different information on repeated invocation.  I am half tempted to
require this to be a static leaf, which appears to be the case on most
hardware I have available to me.
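For context on why leaf 2 is "magic": by the Intel SDM convention, the low
byte of EAX from the first invocation of leaf 2 reports how many times the
leaf must be executed to collect all cache/TLB descriptors.  A minimal
sketch of decoding that count (the helper name is illustrative, not from
the patch):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Per the Intel SDM, CPUID leaf 2 takes no subleaf parameter; instead
 * the low byte of EAX on the first invocation gives the number of times
 * the leaf must be executed to retrieve all descriptors.
 */
static inline unsigned int leaf2_iterations(uint32_t eax)
{
    return eax & 0xff;
}
```

On recent hardware this count byte is 0x01, i.e. a single invocation
suffices, which is what makes treating the leaf as static plausible.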

The handling of leaf 4 is all per-cpu rather than per-domain, which is
why it isn't expressed in this structure.  Doing that sensibly is going
to require the per-cpu topology work.  (There is a lot more CPUID work
than is presented in this series, but it was frankly getting unwieldily
large.)

0xf isn't currently exposed (due to max_leaf being 0xd), and we don't
support PQM in guests yet.  I'd expect the work to expose it to guests
to add a new union, following the 0x7/0xd example.

>
>> +    }
>> +
>> +    if ( p->basic.max_leaf >= 7 )
>> +    {
>> +        cpuid_count_leaf(7, 0, &p->feat.raw[0]);
>> +
>> +        for ( i = 1; i < min(ARRAY_SIZE(p->feat.raw),
>> +                             p->feat.max_subleaf + 1ul); ++i )
>> +            cpuid_count_leaf(7, i, &p->feat.raw[i]);
>> +    }
>> +
>> +    if ( p->basic.max_leaf >= 0xd )
>> +    {
>> +        uint64_t xstates;
>> +
>> +        cpuid_count_leaf(0xd, 0, &p->xstate.raw[0]);
>> +        cpuid_count_leaf(0xd, 1, &p->xstate.raw[1]);
>> +
>> +        xstates = ((uint64_t)(p->xstate.xcr0_high | p->xstate.xss_high) << 32) |
>> +            (p->xstate.xcr0_low | p->xstate.xss_low);
>> +
>> +        for ( i = 2; i < 63; ++i )
>> +        {
>> +            if ( xstates & (1u << i) )
> 1ull

Oops yes.
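Jan's `1ull` point is that with i running up to 62, `1u << i` shifts a
32-bit value by 32 or more bits, which is undefined behaviour in C and
misses every state component above bit 31.  A minimal sketch of the
corrected test (the helper name is illustrative, not from the patch):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Test bit i of a 64-bit xstates mask.  `1u << i` would be a 32-bit
 * shift, undefined for i >= 32; promoting the constant to unsigned
 * long long makes the shift 64 bits wide, as the loop requires.
 */
static inline int xstate_bit_set(uint64_t xstates, unsigned int i)
{
    return (xstates & (1ull << i)) != 0;
}
```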

>
>> @@ -65,6 +66,78 @@ extern struct cpuidmasks cpuidmask_defaults;
>>  /* Whether or not cpuid faulting is available for the current domain. */
>>  DECLARE_PER_CPU(bool, cpuid_faulting_enabled);
>>  
>> +#define CPUID_GUEST_NR_BASIC      (0xdu + 1)
>> +#define CPUID_GUEST_NR_FEAT       (0u + 1)
>> +#define CPUID_GUEST_NR_XSTATE     (62u + 1)
>> +#define CPUID_GUEST_NR_EXTD_INTEL (0x8u + 1)
>> +#define CPUID_GUEST_NR_EXTD_AMD   (0x1cu + 1)
>> +#define CPUID_GUEST_NR_EXTD       MAX(CPUID_GUEST_NR_EXTD_INTEL, \
>> +                                      CPUID_GUEST_NR_EXTD_AMD)
>> +
>> +struct cpuid_policy
>> +{
>> +    /*
>> +     * WARNING: During the CPUID transition period, not all information here
>> +     * is accurate.  The following items are accurate, and can be relied upon.
>> +     *
>> +     * Global *_policy objects:
>> +     *
>> +     * - Host accurate:
>> +     *   - max_{,sub}leaf
>> +     *   - {xcr0,xss}_{high,low}
>> +     *
>> +     * - Guest appropriate:
>> +     *   - Nothing
> I don't understand the meaning of the "accurate" above and
> the "appropriate" here.

This might make more sense in the context of patches 7 and 22, where we
end up with a mix of host values, unsanitised and sanitised guest
values.  This comment describes which values fall into which category,
and is updated across the series.

>
>> +     *
>> +     * Everything else should be considered inaccurate, and not necessarily 0.
>> +     */
>> +
>> +    /* Basic leaves: 0x000000xx */
>> +    union {
>> +        struct cpuid_leaf raw[CPUID_GUEST_NR_BASIC];
>> +        struct {
>> +            /* Leaf 0x0 - Max and vendor. */
>> +            struct {
>> +                uint32_t max_leaf, :32, :32, :32;
> These unnamed bitfields are here solely for the BUILD_BUG_ON()s?

They are to help my counting when adding structs for new leaves, but
they do get filled in with named fields later.
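The counting trick can be seen in a cut-down sketch (the names and the
size check here are illustrative, not the actual Xen definitions): the
unnamed `:32` bitfields reserve the slots for the leaf's remaining
registers, so a compile-time check can insist that the named struct
exactly covers one raw leaf:

```c
#include <assert.h>
#include <stdint.h>

struct cpuid_leaf { uint32_t a, b, c, d; };

/*
 * Illustrative reduction of the pattern: leaf 0 has only one named
 * field so far (max_leaf, in EAX); the three unnamed :32 bitfields
 * keep EBX/ECX/EDX's slots reserved until later patches name them.
 */
struct leaf0 {
    uint32_t max_leaf, :32, :32, :32;
};

/* The equivalent of the series' BUILD_BUG_ON() size check. */
_Static_assert(sizeof(struct leaf0) == sizeof(struct cpuid_leaf),
               "leaf 0 struct must exactly cover one CPUID leaf");
```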

> Wouldn't it make sense to omit them here, and use > instead of !=
> there?

I tried it that way to start with, and got in a mess.

> Also is there really value in nesting unnamed structures like this?

It makes the output of tools like `pahole` on struct cpuid_policy far
clearer to read.  And as above, it aids clarity, particularly when
adding the higher-numbered fields in later patches.
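As a rough sketch of the layout being defended (illustrative field
names, not the real ones): the union lets the same storage be iterated
as raw leaves for serialisation, or read via named fields, and tools
such as `pahole` then show the per-leaf grouping directly:

```c
#include <assert.h>
#include <stdint.h>

struct cpuid_leaf { uint32_t a, b, c, d; };

/*
 * Cut-down illustration of the basic-leaf union: raw[] gives O(1)
 * indexed access, while the anonymous nested structs (one per leaf)
 * give named access to the same bytes.
 */
union basic_leaves {
    struct cpuid_leaf raw[2];
    struct {
        /* Leaf 0x0 - Max and vendor. */
        struct { uint32_t max_leaf, vendor_ebx, vendor_edx, vendor_ecx; };
        /* Leaf 0x1 - Family/model/stepping and features. */
        struct { uint32_t raw_fms, misc_ebx, feat_ecx, feat_edx; };
    };
};
```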

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
