Re: [Xen-devel] [PATCH v11 13/23] x86: refactor psr: CDP: implement CPU init flow.
>>> On 03.05.17 at 10:44, <yi.y.sun@xxxxxxxxxxxxxxx> wrote:
> @@ -150,11 +151,28 @@ static DEFINE_PER_CPU(struct psr_assoc, psr_assoc);
>  * array creation. It is used to transiently store a spare node.
>  */
> static struct feat_node *feat_l3_cat;
> +static struct feat_node *feat_l3_cdp;
>
> /* Common functions */
> #define cat_default_val(len) (0xffffffff >> (32 - (len)))
>
> /*
> + * get_cdp_data - get DATA COS register value from input COS ID.
> + * @feat: the feature node.
> + * @cos: the COS ID.
> + */
> +#define get_cdp_data(feat, cos) \
> +    ( (feat)->cos_reg_val[(cos) * 2] )

Stray blanks inside parentheses.

> +/*
> + * get_cdp_code - get CODE COS register value from input COS ID.
> + * @feat: the feature node.
> + * @cos: the COS ID.
> + */
> +#define get_cdp_code(feat, cos) \
> +    ( (feat)->cos_reg_val[(cos) * 2 + 1] )

Same here. In fact, while I'm not going to ask to remove the parentheses, this is the one case where an expression a macro expands to does not strictly need an outer pair of parentheses, as suffix expressions have highest precedence anyway.

> @@ -249,6 +267,25 @@ static void cat_init_feature(const struct cpuid_leaf *regs,
>
>          break;
>
> +    case PSR_SOCKET_L3_CDP:
> +    {
> +        unsigned long val;

As MSR values are specifically 64-bit ones, I think uint64_t would be more appropriate here. Depending on intended later additions to this function it may also be worthwhile making this a switch-wide variable.

> +        /* Cut half of cos_max when CDP is enabled. */
> +        feat->cos_max >>= 1;
> +
> +        /* We only write mask1 since mask0 is always all ones by default. */

Is this, btw, just reset state or even guaranteed after offlining and re-onlining a CPU?

> +        wrmsrl(MSR_IA32_PSR_L3_MASK(1), cat_default_val(feat->cbm_len));
> +        rdmsrl(MSR_IA32_PSR_L3_QOS_CFG, val);
> +        wrmsrl(MSR_IA32_PSR_L3_QOS_CFG, val | (1 << PSR_L3_QOS_CDP_ENABLE_BIT));

1u at the very least, perhaps even 1ull.
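Pulling the review comments together, a minimal self-contained sketch of how the fragments might look with the suggested changes applied (uint64_t for the MSR value, 1ull for the shifted enable bit, no stray blanks in the macros). The `rdmsrl`/`wrmsrl` stubs, `struct feat_node` layout, and the `MSR_IA32_PSR_L3_QOS_CFG` / `PSR_L3_QOS_CDP_ENABLE_BIT` values are hypothetical stand-ins for illustration, not Xen's actual definitions:

```c
#include <stdint.h>

/* Hypothetical stand-ins for the Xen constants referenced in the patch. */
#define MSR_IA32_PSR_L3_QOS_CFG   0xc81
#define PSR_L3_QOS_CDP_ENABLE_BIT 0

#define cat_default_val(len) (0xffffffff >> (32 - (len)))

/*
 * DATA/CODE COS register values share an interleaved array: data at
 * even slots, code at odd slots.  No blanks inside the parentheses;
 * the outer pair is optional since a suffix expression already binds
 * tightest, but is kept for consistency.
 */
#define get_cdp_data(feat, cos) ((feat)->cos_reg_val[(cos) * 2])
#define get_cdp_code(feat, cos) ((feat)->cos_reg_val[(cos) * 2 + 1])

struct feat_node {
    unsigned int cos_max;
    unsigned int cbm_len;
    uint32_t cos_reg_val[8];
};

/* Stubbed MSR accessors backed by a plain variable, for illustration. */
static uint64_t fake_qos_cfg;
#define rdmsrl(msr, v) ((v) = fake_qos_cfg)
#define wrmsrl(msr, v) (fake_qos_cfg = (v))

static void cdp_init_sketch(struct feat_node *feat)
{
    uint64_t val;   /* MSR values are 64-bit, hence uint64_t */

    /* CDP halves the number of usable COS IDs. */
    feat->cos_max >>= 1;

    rdmsrl(MSR_IA32_PSR_L3_QOS_CFG, val);
    /* 1ull so the shifted bit cannot overflow a plain int. */
    wrmsrl(MSR_IA32_PSR_L3_QOS_CFG,
           val | (1ull << PSR_L3_QOS_CDP_ENABLE_BIT));
}
```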
Jan