Re: [Xen-devel] [PATCH v4 04/12] x86: maintain COS to CBM mapping for each socket
On 09/04/2015 10:18, Chao Peng wrote:
> For each socket, a COS to CBM mapping structure is maintained for each
> COS. The mapping is indexed by COS and the value is the corresponding
> CBM. Different VMs may use the same CBM, a reference count is used to
> indicate if the CBM is available.
>
> Signed-off-by: Chao Peng <chao.p.peng@xxxxxxxxxxxxxxx>
> ---
>  xen/arch/x86/psr.c | 14 ++++++++++++++
>  1 file changed, 14 insertions(+)
>
> diff --git a/xen/arch/x86/psr.c b/xen/arch/x86/psr.c
> index 16c37dd..4aff5f6 100644
> --- a/xen/arch/x86/psr.c
> +++ b/xen/arch/x86/psr.c
> @@ -21,11 +21,17 @@
>  #define PSR_CMT (1<<0)
>  #define PSR_CAT (1<<1)
>
> +struct psr_cat_cbm {
> +    unsigned int ref;
> +    uint64_t cbm;
> +};
> +
>  struct psr_cat_socket_info {
>      bool_t initialized;
>      bool_t enabled;
>      unsigned int cbm_len;
>      unsigned int cos_max;
> +    struct psr_cat_cbm *cos_cbm_map;

"cos_to_cbm" would be more in keeping with Xen style, and IMO easier to
read in code.

>  };
>
>  struct psr_assoc {
> @@ -240,6 +246,14 @@ static void cat_cpu_init(unsigned int cpu)
>      info->cbm_len = (eax & 0x1f) + 1;
>      info->cos_max = (edx & 0xffff);

Apologies for missing this in the previous patch, but cos_max should
have a command line parameter like rmid_max if a lower limit is to be
enforced.

Otherwise, Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>

>
> +    info->cos_cbm_map = xzalloc_array(struct psr_cat_cbm,
> +                                      info->cos_max + 1UL);
> +    if ( !info->cos_cbm_map )
> +        return;
> +
> +    /* cos=0 is reserved as default cbm(all ones). */
> +    info->cos_cbm_map[0].cbm = (1ull << info->cbm_len) - 1;
> +
>      info->enabled = 1;
>      printk(XENLOG_INFO "CAT: enabled on socket %u, cos_max:%u, cbm_len:%u\n",
>             socket, info->cos_max, info->cbm_len);
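[Editor's note: a minimal sketch of how a refcounted COS-to-CBM table like
the one added by this patch could be consulted when a domain requests a CBM.
The two structures mirror those in the diff above; pick_cos() and its
allocation policy are hypothetical illustrations only, not code from this
series (the actual lookup/allocation logic is introduced in later patches).]

    /* Sketch only: not Xen code from this patch series. */
    #include <stdint.h>

    struct psr_cat_cbm {
        unsigned int ref;   /* number of domains currently using this CBM */
        uint64_t cbm;       /* cache bit mask programmed for this COS */
    };

    struct psr_cat_socket_info {
        unsigned int cbm_len;
        unsigned int cos_max;
        struct psr_cat_cbm *cos_cbm_map;   /* indexed by COS, 0..cos_max */
    };

    /*
     * Hypothetical helper: return the COS to use for 'cbm', reusing an
     * entry that already holds the same CBM (domains with identical masks
     * share a COS) or claiming an entry whose refcount is zero.  Returns
     * -1 when the socket has no COS left.  COS 0 is the reserved default
     * (all ones) and is never handed out here.
     */
    static int pick_cos(struct psr_cat_socket_info *info, uint64_t cbm)
    {
        unsigned int cos, free_cos = 0;

        for ( cos = 1; cos <= info->cos_max; cos++ )
        {
            struct psr_cat_cbm *ent = &info->cos_cbm_map[cos];

            if ( ent->ref && ent->cbm == cbm )
            {
                ent->ref++;            /* share the existing COS */
                return cos;
            }
            if ( !ent->ref && !free_cos )
                free_cos = cos;        /* remember first unused slot */
        }

        if ( !free_cos )
            return -1;                 /* all COS values in use */

        info->cos_cbm_map[free_cos].cbm = cbm;
        info->cos_cbm_map[free_cos].ref = 1;
        return free_cos;
    }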