
Re: [PATCH 6/6] xen/x86: Add topology generator



On 26/03/2024 17:02, Jan Beulich wrote:
> On 09.01.2024 16:38, Alejandro Vallejo wrote:
>> --- a/tools/include/xenguest.h
>> +++ b/tools/include/xenguest.h
>> @@ -843,5 +843,20 @@ enum xc_static_cpu_featuremask {
>>      XC_FEATUREMASK_HVM_HAP_DEF,
>>  };
>>  const uint32_t *xc_get_static_cpu_featuremask(enum xc_static_cpu_featuremask);
>> +
>> +/**
>> + * Synthesise topology information in `p` given high-level constraints
>> + *
>> + * Topology is given in various fields across several leaves, some of
>> + * which are vendor-specific. This function uses the policy itself to
>> + * derive such leaves from threads/core and cores/package.
>> + *
>> + * @param p                   CPU policy of the domain.
>> + * @param threads_per_core    threads/core. Doesn't need to be a power of 2.
>> + * @param cores_per_pkg       cores/package. Doesn't need to be a power of 2.
>> + */
>> +void xc_topo_from_parts(struct cpu_policy *p,
>> +                        uint32_t threads_per_core, uint32_t cores_per_pkg);
> 
> Do we really want to constrain things to just two (or in fact any fixed number
> of) levels? Iirc on AMD there already can be up to 4.

For the time being, I think we should keep it simple(ish).

Leaf 0xb is always 2 levels, and it implicitly defines the offset into
the x2APIC ID for the 3rd level (the package, or the die). This approach
can be used for a long time before we need to expose anything else. It
can already be used for exposing multi-socket configurations, but it's
not properly wired to xl.
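To make the implicit 3rd level concrete, here's a toy sketch (not Xen code, just an illustration of the shift arithmetic) of how the two leaf 0xb id_shift values partition an x2APIC ID:

```c
#include <stdint.h>

/* Toy decode of an x2APIC ID given the two leaf 0xb shifts:
 * thread_shift comes from subleaf 0 (SMT level), core_shift from
 * subleaf 1 (core level); everything at or above core_shift is
 * implicitly the package (or die). */
static uint32_t thread_id(uint32_t x2apic_id, unsigned int thread_shift)
{
    return x2apic_id & ((1u << thread_shift) - 1);
}

static uint32_t core_id(uint32_t x2apic_id, unsigned int thread_shift,
                        unsigned int core_shift)
{
    return (x2apic_id >> thread_shift) &
           ((1u << (core_shift - thread_shift)) - 1);
}

static uint32_t package_id(uint32_t x2apic_id, unsigned int core_shift)
{
    return x2apic_id >> core_shift;
}
```

e.g. with 2 threads/core (thread_shift 1) and 8 cores/package (core_shift 4), x2APIC ID 0x25 decodes to package 2, core 2, thread 1.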

I suspect we won't have a need to expose more complicated topologies
until hetero systems are more common, and by that time the generator
will need tweaking anyway.

> 
>> @@ -1028,3 +976,89 @@ bool xc_cpu_policy_is_compatible(xc_interface *xch, xc_cpu_policy_t *host,
>>  
>>      return false;
>>  }
>> +
>> +static uint32_t order(uint32_t n)
>> +{
>> +    return 32 - __builtin_clz(n);
>> +}
> 
> This isn't really portable. __builtin_clz() takes an unsigned int, which may
> in principle be wider than 32 bits. I also can't see a reason for the
> function to have a fixed-width return type. Perhaps

Sure, unsigned int it is. And I'll s/CHAR_BIT/8/, since CHAR_BIT == 8 is
already mandated by POSIX.
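i.e. roughly this shape (a sketch of the adjusted helper; the explicit
guard is mine, since __builtin_clz(0) is undefined — the patch's callers
already avoid that case with their "> 1" checks):

```c
/* Smallest number of bits needed to represent n, i.e. for n > 0,
 * order(n) == floor(log2(n)) + 1.  __builtin_clz(0) is undefined
 * behaviour, hence the explicit guard. */
static unsigned int order(unsigned int n)
{
    return n ? sizeof(n) * 8 - (unsigned int)__builtin_clz(n) : 0;
}
```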

> 
> static unsigned int order(unsigned int n)
> {
>     return sizeof(n) * CHAR_BIT - __builtin_clz(n);
> }
> 
> ?
> 
>> +void xc_topo_from_parts(struct cpu_policy *p,
>> +                        uint32_t threads_per_core, uint32_t cores_per_pkg)
>> +{
>> +    uint32_t threads_per_pkg = threads_per_core * cores_per_pkg;
>> +    uint32_t apic_id_size;
>> +
>> +    if ( p->basic.max_leaf < 0xb )
>> +        p->basic.max_leaf = 0xb;
>> +
>> +    memset(p->topo.raw, 0, sizeof(p->topo.raw));
>> +
>> +    /* thread level */
>> +    p->topo.subleaf[0].nr_logical = threads_per_core;
>> +    p->topo.subleaf[0].id_shift = 0;
>> +    p->topo.subleaf[0].level = 0;
>> +    p->topo.subleaf[0].type = 1;
>> +    if ( threads_per_core > 1 )
>> +        p->topo.subleaf[0].id_shift = order(threads_per_core - 1);
>> +
>> +    /* core level */
>> +    p->topo.subleaf[1].nr_logical = cores_per_pkg;
>> +    if ( p->x86_vendor == X86_VENDOR_INTEL )
>> +        p->topo.subleaf[1].nr_logical = threads_per_pkg;
> 
> Same concern as in the other patch regarding "== Intel".
> 

Can you please articulate the concern?

>> +    p->topo.subleaf[1].id_shift = p->topo.subleaf[0].id_shift;
>> +    p->topo.subleaf[1].level = 1;
>> +    p->topo.subleaf[1].type = 2;
>> +    if ( cores_per_pkg > 1 )
>> +        p->topo.subleaf[1].id_shift += order(cores_per_pkg - 1);
> 
> Don't you want to return an error when any of the X_per_Y values is 0?

I'd rather not.

The checks on input parameters should be done wherever the inputs are
taken from. Currently the call site passes threads_per_core=1 and
cores_per_pkg=1+max_vcpus, so it's all guaranteed to work out.

Once it comes from xl, libxl should be in charge of the validations.
Furthermore, there are validations the function simply cannot do (nor
should it) in its current form, like checking that...

    max_vcpus == threads_per_core * cores_per_pkg * n_pkgs.
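To illustrate, such a libxl-side check could look something like this
(the helper name and signature are made up for illustration, not
existing libxl API):

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical toolstack-side validation: the topology parameters
 * must exactly cover the domain's vCPUs, and none may be zero. */
static bool topo_params_valid(uint32_t max_vcpus, uint32_t threads_per_core,
                              uint32_t cores_per_pkg, uint32_t n_pkgs)
{
    if ( !threads_per_core || !cores_per_pkg || !n_pkgs )
        return false;

    /* Widen to 64 bits to avoid overflow in the product. */
    return (uint64_t)threads_per_core * cores_per_pkg * n_pkgs == max_vcpus;
}
```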

> 
>> +    apic_id_size = p->topo.subleaf[1].id_shift;
>> +
>> +    /*
>> +     * Contrary to what the name might seem to imply, HTT is an enabler for
>> +     * SMP and there's no harm in setting it even with a single vCPU.
>> +     */
>> +    p->basic.htt = true;
>> +
>> +    p->basic.lppp = 0xff;
>> +    if ( threads_per_pkg < 0xff )
>> +        p->basic.lppp = threads_per_pkg;
>> +
>> +    switch ( p->x86_vendor )
>> +    {
>> +        case X86_VENDOR_INTEL:
>> +            struct cpuid_cache_leaf *sl = p->cache.subleaf;
>> +            for ( size_t i = 0; i < ARRAY_SIZE(p->cache.raw) &&
>> +                                sl->type; i++, sl++ )
>> +            {
>> +                sl->cores_per_package = cores_per_pkg - 1;
>> +                sl->threads_per_cache = threads_per_core - 1;
> 
> IOW the names in struct cpuid_cache_leaf aren't quite correct.

Because of the "- 1", you mean? If anything, our name is marginally
clearer than the SDM description, which goes on to say "Add one to the
return value to get the result" in a [**] note, so it's not something we
made up.

Xen: threads_per_cache => SDM: Maximum number of addressable IDs for
logical processors sharing this cache

Xen: cores_per_package => SDM: Maximum number of addressable IDs for
processor cores in the physical package
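To spell out the off-by-one encoding those fields use (toy sketch, the
helper names are made up):

```c
#include <stdint.h>

/* The leaf 4 fields hold a "maximum addressable ID", i.e. count - 1;
 * per the SDM note, add one to the stored value to recover the count. */
static uint32_t leaf4_encode(uint32_t count) { return count - 1; }
static uint32_t leaf4_decode(uint32_t field) { return field + 1; }
```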

> 
>> +                if ( sl->type == 3 /* unified cache */ )
>> +                    sl->threads_per_cache = threads_per_pkg - 1;
>> +            }
>> +            break;
>> +
>> +        case X86_VENDOR_AMD:
>> +        case X86_VENDOR_HYGON:
>> +            /* Expose p->basic.lppp */
>> +            p->extd.cmp_legacy = true;
>> +
>> +            /* Clip NC to the maximum value it can hold */
>> +            p->extd.nc = 0xff;
>> +            if ( threads_per_pkg <= 0xff )
>> +                p->extd.nc = threads_per_pkg - 1;
>> +
>> +            /* TODO: Expose leaf e1E */
>> +            p->extd.topoext = false;
>> +
>> +            /*
>> +             * Clip APIC ID to 8, as that's what high core-count machines do
> 
> Nit: "... to 8 bits, ..."

ack

> 
>> +             *
>> +             * That's what AMD EPYC 9654 does with >256 CPUs
>> +             */
>> +            p->extd.apic_id_size = 8;
>> +            if ( apic_id_size < 8 )
>> +                p->extd.apic_id_size = apic_id_size;
> 
> Use min(), with apic_id_size's type suitably adjusted?

ack

> 
>> --- a/xen/arch/x86/cpu-policy.c
>> +++ b/xen/arch/x86/cpu-policy.c
>> @@ -278,9 +278,6 @@ static void recalculate_misc(struct cpu_policy *p)
>>  
>>      p->basic.raw[0x8] = EMPTY_LEAF;
>>  
>> -    /* TODO: Rework topology logic. */
>> -    memset(p->topo.raw, 0, sizeof(p->topo.raw));
>> -
>>      p->basic.raw[0xc] = EMPTY_LEAF;
>>  
>>      p->extd.e1d &= ~CPUID_COMMON_1D_FEATURES;
>> @@ -387,6 +384,9 @@ static void __init calculate_host_policy(void)
>>      recalculate_xstate(p);
>>      recalculate_misc(p);
>>  
>> +    /* Wipe host topology. Toolstack is expected to synthesise a sensible one */
>> +    memset(p->topo.raw, 0, sizeof(p->topo.raw));
> 
> I don't think this should be zapped from the host policy. It wants zapping
> from the guest ones instead, imo. The host policy may (will) want using in
> Xen itself, and hence it should reflect reality.
> 
> Jan

Shouldn't Xen be checking its own cached state from the raw policy
instead? My understanding was that to a first approximation the host
policy is a template for guest creation. It already has a bunch of
overrides that don't match the real hardware configuration.

Cheers,
Alejandro




 

