Re: [PATCH 6/6] xen/x86: Add topology generator
On 01.05.2024 19:06, Alejandro Vallejo wrote:
> On 26/03/2024 17:02, Jan Beulich wrote:
>> On 09.01.2024 16:38, Alejandro Vallejo wrote:
>>> --- a/tools/include/xenguest.h
>>> +++ b/tools/include/xenguest.h
>>> @@ -843,5 +843,20 @@ enum xc_static_cpu_featuremask {
>>>      XC_FEATUREMASK_HVM_HAP_DEF,
>>>  };
>>>  const uint32_t *xc_get_static_cpu_featuremask(enum xc_static_cpu_featuremask);
>>> +
>>> +/**
>>> + * Synthesise topology information in `p` given high-level constraints
>>> + *
>>> + * Topology is given in various fields accross several leaves, some of
>>> + * which are vendor-specific. This function uses the policy itself to
>>> + * derive such leaves from threads/core and cores/package.
>>> + *
>>> + * @param p                 CPU policy of the domain.
>>> + * @param threads_per_core  threads/core. Doesn't need to be a power of 2.
>>> + * @param cores_per_package cores/package. Doesn't need to be a power of 2.
>>> + */
>>> +void xc_topo_from_parts(struct cpu_policy *p,
>>> +                        uint32_t threads_per_core, uint32_t cores_per_pkg);
>>
>> Do we really want to constrain things to just two (or in fact any fixed
>> number of) levels? Iirc on AMD there already can be up to 4.
>
> For the time being, I think we should keep it simple(ish).

Perhaps, with (briefly) stating the reason(s) for doing so.

>>> +void xc_topo_from_parts(struct cpu_policy *p,
>>> +                        uint32_t threads_per_core, uint32_t cores_per_pkg)
>>> +{
>>> +    uint32_t threads_per_pkg = threads_per_core * cores_per_pkg;
>>> +    uint32_t apic_id_size;
>>> +
>>> +    if ( p->basic.max_leaf < 0xb )
>>> +        p->basic.max_leaf = 0xb;
>>> +
>>> +    memset(p->topo.raw, 0, sizeof(p->topo.raw));
>>> +
>>> +    /* thread level */
>>> +    p->topo.subleaf[0].nr_logical = threads_per_core;
>>> +    p->topo.subleaf[0].id_shift = 0;
>>> +    p->topo.subleaf[0].level = 0;
>>> +    p->topo.subleaf[0].type = 1;
>>> +    if ( threads_per_core > 1 )
>>> +        p->topo.subleaf[0].id_shift = order(threads_per_core - 1);
>>> +
>>> +    /* core level */
>>> +    p->topo.subleaf[1].nr_logical = cores_per_pkg;
>>> +    if ( p->x86_vendor == X86_VENDOR_INTEL )
>>> +        p->topo.subleaf[1].nr_logical = threads_per_pkg;
>>
>> Same concern as in the other patch regarding "== Intel".
>
> Can you please articulate the concern?

See my replies to patch 5. Exactly the same applies here.

>>> +    p->topo.subleaf[1].id_shift = p->topo.subleaf[0].id_shift;
>>> +    p->topo.subleaf[1].level = 1;
>>> +    p->topo.subleaf[1].type = 2;
>>> +    if ( cores_per_pkg > 1 )
>>> +        p->topo.subleaf[1].id_shift += order(cores_per_pkg - 1);
>>
>> Don't you want to return an error when any of the X_per_Y values is 0?
>
> I'd rather not.
>
> The checks on input parameters should be done wherever the inputs are
> taken from. Currently the call site passes threads_per_core=1 and
> cores_per_pkg=1+max_vcpus, so it's all guaranteed to work out.

Hmm, and then have this function produce potentially bogus results?

> Once it comes from xl, libxl should be in charge of the validations.
> Furthermore there's validations the function simply cannot do (nor
> should it) in its current form, like checking that...
>
>   max_vcpus == threads_per_core * cores_per_pkg * n_pkgs.

But that isn't an equality that needs to hold, is it? It would put
constraints on exposing e.g. HT to a guest with an odd number of vCPU-s.
Imo especially with core scheduling HT-ness wants properly reflecting
from underlying hardware, so core-sched-like behavior in the guest
itself can actually achieve its goals.
>>> +    apic_id_size = p->topo.subleaf[1].id_shift;
>>> +
>>> +    /*
>>> +     * Contrary to what the name might seem to imply. HTT is an enabler for
>>> +     * SMP and there's no harm in setting it even with a single vCPU.
>>> +     */
>>> +    p->basic.htt = true;
>>> +
>>> +    p->basic.lppp = 0xff;
>>> +    if ( threads_per_pkg < 0xff )
>>> +        p->basic.lppp = threads_per_pkg;
>>> +
>>> +    switch ( p->x86_vendor )
>>> +    {
>>> +    case X86_VENDOR_INTEL:
>>> +        struct cpuid_cache_leaf *sl = p->cache.subleaf;
>>> +        for ( size_t i = 0; sl->type &&
>>> +              i < ARRAY_SIZE(p->cache.raw); i++, sl++ )
>>> +        {
>>> +            sl->cores_per_package = cores_per_pkg - 1;
>>> +            sl->threads_per_cache = threads_per_core - 1;
>>
>> IOW the names in struct cpuid_cache_leaf aren't quite correct.
>
> Because of the - 1, you mean?

Yes. The expressions above simply read as if there was an (obvious)
off-by-1.

> If anything our name is marginally clearer than the SDM description.
> It goes on to say "Add one to the return value to get the result" in a
> [**] note, so it's not something we made up.
>
> Xen: threads_per_cache => SDM: Maximum number of addressable IDs for
> logical processors sharing this cache
>
> Xen: cores_per_package => SDM: Maximum number of addressable IDs for
> processor cores in the physical package

I'm afraid I don't follow what you're trying to justify here. Following
SDM naming is generally advisable, yes, but only as far as no confusion
results. Otherwise imo a better name wants picking, with the struct
field declaration then accompanied by a comment clarifying the
difference wrt SDM.

>>> --- a/xen/arch/x86/cpu-policy.c
>>> +++ b/xen/arch/x86/cpu-policy.c
>>> @@ -278,9 +278,6 @@ static void recalculate_misc(struct cpu_policy *p)
>>>
>>>      p->basic.raw[0x8] = EMPTY_LEAF;
>>>
>>> -    /* TODO: Rework topology logic. */
>>> -    memset(p->topo.raw, 0, sizeof(p->topo.raw));
>>> -
>>>      p->basic.raw[0xc] = EMPTY_LEAF;
>>>
>>>      p->extd.e1d &= ~CPUID_COMMON_1D_FEATURES;
>>> @@ -387,6 +384,9 @@ static void __init calculate_host_policy(void)
>>>      recalculate_xstate(p);
>>>      recalculate_misc(p);
>>>
>>> +    /* Wipe host topology. Toolstack is expected to synthesise a sensible one */
>>> +    memset(p->topo.raw, 0, sizeof(p->topo.raw));
>>
>> I don't think this should be zapped from the host policy. It wants
>> zapping from the guest ones instead, imo. The host policy may (will)
>> want using in Xen itself, and hence it should reflect reality.
>
> Shouldn't Xen be checking its own cached state from the raw policy
> instead? My understanding was that to a first approximation the host
> policy is a template for guest creation. It already has a bunch of
> overrides that don't match the real hardware configuration.

No, raw policy is what comes from hardware, entirely unadjusted for e.g.
command line options or Xen internal restrictions. Any decisions Xen
takes for itself should be based on the host policy (whereas guest
related decisions are to use the respective guest policy).

Jan
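
To make the shift arithmetic discussed in the thread concrete: each
leaf-0xB-style level encodes how many APIC ID bits the level(s) below it
consume, i.e. ceil(log2(count)), accumulated upwards. Below is a minimal
standalone sketch of that derivation, not the patch's xc_topo_from_parts()
itself; the helper name id_bits() and the example values are made up for
illustration.

#include <stdint.h>
#include <stdio.h>

/* Smallest n such that (1u << n) >= x, i.e. ceil(log2(x)) for x >= 1. */
static unsigned int id_bits(uint32_t x)
{
    unsigned int n = 0;

    while ( (1u << n) < x )
        n++;

    return n;
}

int main(void)
{
    uint32_t threads_per_core = 2, cores_per_pkg = 6; /* example values */

    /* Bits needed to number threads within a core. */
    unsigned int thread_shift = id_bits(threads_per_core);
    /* Bits needed to number all threads within a package. */
    unsigned int core_shift = thread_shift + id_bits(cores_per_pkg);

    /* Subleaf 0 (thread level) and subleaf 1 (core level), leaf-0xB style. */
    printf("SMT  level: nr_logical=%u id_shift=%u\n",
           threads_per_core, thread_shift);
    printf("Core level: nr_logical=%u id_shift=%u\n",
           threads_per_core * cores_per_pkg, core_shift);

    return 0;
}

For threads_per_core=2 and cores_per_pkg=6 this yields id_shift values of
1 and 4 respectively; the patch's order(x - 1) calls presumably compute
the same ceil(log2(x)) quantity, which is also why non-power-of-2 counts
are fine.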
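
The naming debate around struct cpuid_cache_leaf comes down to leaf 4's
"maximum addressable IDs minus one" encoding. Here is a hedged toy
example of that encoding only; the struct and field names are invented
for the example and do not mirror the real cpuid_cache_leaf layout.

#include <stdint.h>
#include <stdio.h>

struct cache_leaf_example {
    uint32_t cores_per_package;   /* encoded as N - 1 */
    uint32_t threads_per_cache;   /* encoded as N - 1 */
};

int main(void)
{
    uint32_t cores_per_pkg = 6, threads_per_core = 2;

    /* Writing the leaf stores value-minus-one, as in the quoted hunk. */
    struct cache_leaf_example l = {
        .cores_per_package = cores_per_pkg - 1,
        .threads_per_cache = threads_per_core - 1,
    };

    /* Reading it back needs the "+ 1" the SDM footnote mentions. */
    printf("cores/pkg = %u, threads sharing cache = %u\n",
           l.cores_per_package + 1, l.threads_per_cache + 1);

    return 0;
}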