Re: [Xen-devel] [PATCH RFC 26/31] xen/x86: Rework AMD masking MSR setup
On 22/01/16 09:27, Jan Beulich wrote:
>>>> On 16.12.15 at 22:24, <andrew.cooper3@xxxxxxxxxx> wrote:
>> This patch is best reviewed as its end result rather than as a diff, as it
>> rewrites almost all of the setup.
> This, I think, doesn't belong in the commit message itself.

Why not?  It applies equally to anyone reading the commit in tree.

>
>> @@ -126,126 +133,172 @@ static const struct cpuidmask *__init noinline
>> get_cpuidmask(const char *opt)
>>  }
>>
>>  /*
>> - * Mask the features and extended features returned by CPUID.  Parameters are
>> - * set from the boot line via two methods:
>> - *
>> - * 1) Specific processor revision string
>> - * 2) User-defined masks
>> - *
>> - * The processor revision string parameter has precedene.
>> + * Sets caps in expected_levelling_cap, probes for the specified mask MSR, and
>> + * set caps in levelling_caps if it is found.  Processors prior to Fam 10h
>> + * required a 32-bit password for masking MSRs.  Reads the default value into
>> + * msr_val.
>>   */
>> -static void __devinit set_cpuidmask(const struct cpuinfo_x86 *c)
>> +static void __init __probe_mask_msr(unsigned int msr, uint64_t caps,
>> +                                    uint64_t *msr_val)
>>  {
>> -    static unsigned int feat_ecx, feat_edx;
>> -    static unsigned int extfeat_ecx, extfeat_edx;
>> -    static unsigned int l7s0_eax, l7s0_ebx;
>> -    static unsigned int thermal_ecx;
>> -    static bool_t skip_feat, skip_extfeat;
>> -    static bool_t skip_l7s0_eax_ebx, skip_thermal_ecx;
>> -    static enum { not_parsed, no_mask, set_mask } status;
>> -    unsigned int eax, ebx, ecx, edx;
>> -
>> -    if (status == no_mask)
>> -        return;
>> +    unsigned int hi, lo;
>> +
>> +    expected_levelling_cap |= caps;
> Mixing indentation styles.

Yes - sadly some crept in, despite my best efforts.  I have already
fixed all I can find in the series.
>
>> +
>> +    if ((rdmsr_amd_safe(msr, &lo, &hi) == 0) &&
>> +        (wrmsr_amd_safe(msr, lo, hi) == 0))
>> +        levelling_caps |= caps;
> This logic assumes that faults are possible only because the MSR is
> not there at all, not because of the actual value used.  Is this a safe
> assumption, the more that you don't use the "safe" variants
> anymore at runtime?

Yes - it is a safe assumption.  The MSRs on AMD are at fixed indices and
are all specified to cover the full 32 bits of a cpuid feature leaf.  All
we really care about is whether the MSR is present (and to read its
defaults, if it is).  If the MSR isn't present, levelling_caps won't be
updated, and Xen will never touch the MSR again.

>
>> +/*
>> + * Probe for the existance of the expected masking MSRs.  They might easily
>> + * not be available if Xen is running virtualised.
>> + */
>> +static void __init noinline probe_masking_msrs(void)
>> +{
>> +    const struct cpuinfo_x86 *c = &boot_cpu_data;
>>
>> -    ASSERT((status == not_parsed) && (c == &boot_cpu_data));
>> -    status = no_mask;
>> +    /*
>> +     * First, work out which masking MSRs we should have, based on
>> +     * revision and cpuid.
>> +     */
>>
>>      /* Fam11 doesn't support masking at all. */
>>      if (c->x86 == 0x11)
>>          return;
>>
>> -    if (~(opt_cpuid_mask_ecx & opt_cpuid_mask_edx &
>> -          opt_cpuid_mask_ext_ecx & opt_cpuid_mask_ext_edx &
>> -          opt_cpuid_mask_l7s0_eax & opt_cpuid_mask_l7s0_ebx &
>> -          opt_cpuid_mask_thermal_ecx)) {
>> -        feat_ecx = opt_cpuid_mask_ecx;
>> -        feat_edx = opt_cpuid_mask_edx;
>> -        extfeat_ecx = opt_cpuid_mask_ext_ecx;
>> -        extfeat_edx = opt_cpuid_mask_ext_edx;
>> -        l7s0_eax = opt_cpuid_mask_l7s0_eax;
>> -        l7s0_ebx = opt_cpuid_mask_l7s0_ebx;
>> -        thermal_ecx = opt_cpuid_mask_thermal_ecx;
>> -    } else if (*opt_famrev == '\0') {
>> +    __probe_mask_msr(MSR_K8_FEATURE_MASK, LCAP_1cd,
>> +                     &cpumask_defaults._1cd);
>> +    __probe_mask_msr(MSR_K8_EXT_FEATURE_MASK, LCAP_e1cd,
>> +                     &cpumask_defaults.e1cd);
>> +
>> +    if (c->cpuid_level >= 7)
>> +        __probe_mask_msr(MSR_AMD_L7S0_FEATURE_MASK, LCAP_7ab0,
>> +                         &cpumask_defaults._7ab0);
>> +
>> +    if (c->x86 == 0x15 && c->cpuid_level >= 6 && cpuid_ecx(6))
>> +        __probe_mask_msr(MSR_AMD_THRM_FEATURE_MASK, LCAP_6c,
>> +                         &cpumask_defaults._6c);
>> +
>> +    /*
>> +     * Don't bother warning about a mismatch if virtualised.  These MSRs
>> +     * are not architectural and almost never virtualised.
>> +     */
>> +    if ((expected_levelling_cap == levelling_caps) ||
>> +        cpu_has_hypervisor)
>>          return;
>> -    } else {
>> -        const struct cpuidmask *m = get_cpuidmask(opt_famrev);
>> +
>> +    printk(XENLOG_WARNING "Mismatch between expected (%#x"
>> +           ") and real (%#x) levelling caps: missing %#x\n",
> Breaking a format string inside parentheses is certainly a little odd.
> Also I don't think the "missing" part is really useful, considering that
> you already print both values used in its calculation.

Not everyone can do multi-stage 32-bit hex arithmetic in their head.
With the current number ranges here, it is probably ok, but I wish to
avoid it turning into error messages such as we have in hvm cr4
auditing, where it definitely takes some thought to decode.
Or in other words, it's no harm to provide, and it short-circuits the
first step required in debugging the issue.

>
>> +           expected_levelling_cap, levelling_caps,
>> +           (expected_levelling_cap ^ levelling_caps) & levelling_caps);
>> +    printk(XENLOG_WARNING "Fam %#x, model %#x level %#x\n",
>> +           c->x86, c->x86_model, c->cpuid_level);
>> +    printk(XENLOG_WARNING
>> +           "If not running virtualised, please report a bug\n");
> Well - you checked for running virtualized, so making it here when
> running virtualized is already a bug (albeit not in the code here)?

Why would it be a bug?  The virtualising environment might provide these
MSRs, in which case we should use them.  It is just that a virtualising
environment usually doesn't, at which point we shouldn't complain to the
user that Xen's heuristics for determining the masking set need
updating.

>
>> +void amd_ctxt_switch_levelling(const struct domain *nextd)
>> +{
>> +    struct cpumasks *these_masks = &this_cpu(cpumasks);
>> +    const struct cpumasks *masks = &cpumask_defaults;
>> +
>> +#define LAZY(cap, msr, field)                                           \
>> +    ({                                                                  \
>> +        if ( ((levelling_caps & cap) == cap) &&                         \
>> +             (these_masks->field != masks->field) )                     \
>> +        {                                                               \
>> +            wrmsr_amd(msr, masks->field);                               \
>> +            these_masks->field = masks->field;                          \
>> +        }                                                               \
>> +    })
>> +
>> +    LAZY(LCAP_1cd,  MSR_K8_FEATURE_MASK,       _1cd);
>> +    LAZY(LCAP_e1cd, MSR_K8_EXT_FEATURE_MASK,   e1cd);
>> +    LAZY(LCAP_7ab0, MSR_AMD_L7S0_FEATURE_MASK, _7ab0);
>> +    LAZY(LCAP_6c,   MSR_AMD_THRM_FEATURE_MASK, _6c);
> So here we already have the first example where fully consistent
> naming would allow elimination of a macro parameter.

Token concatenation deliberately obscures code from tools like grep and
cscope.  There is already too much of the Xen source code obscured like
this; I'd prefer not to add to it.

>
> And then, how is this supposed to work?  You only restore defaults,
> but never write non-default values.  Namely, nextd is an unused
> function parameter ...
>
> Also I guess my comment about adding unused code needs
> repeating here.

Future patches build on this, both using the parameter, and not using
the defaults.

I am also certain that if I did two patches, the first putting in a void
function, and the second changing it to a pointer, your review would ask
me to turn it into this.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel