[Xen-devel] [PATCH 0/3] Enable L2 Cache Allocation Technology
Design document is below:

=======================================================================

% Intel L2 Cache Allocation Technology (L2 CAT) Feature
% Revision 1.0

\clearpage

Hi all,

We plan to bring a new PSR (Platform Shared Resource) feature called
Intel L2 Cache Allocation Technology (L2 CAT) to Xen. This is the
initial design of L2 CAT. It may be a little long and detailed; we hope
that is not a problem.

Besides the L2 CAT implementation, we refactor psr.c to make it more
flexible for adding new features, following the principle of being open
for extension but closed for modification.

Comments and suggestions are welcome :-)

# Basics

---------------- ----------------------------------------------------
         Status: **Tech Preview**

Architecture(s): Intel x86

   Component(s): Hypervisor, toolstack

       Hardware: Atom codename Goldmont and beyond
---------------- ----------------------------------------------------

# Overview

L2 CAT allows an OS or Hypervisor/VMM to control allocation of a CPU's
shared L2 cache based on application priority or Class of Service
(COS). Each COS is configured using capacity bitmasks (CBMs), which
represent cache capacity and indicate the degree of overlap and
isolation between classes. Once L2 CAT is configured, the processor
allows access to portions of the L2 cache according to the established
class of service (COS).

# Technical information

L2 CAT is a member of the Intel PSR features and part of CAT; it shares
some base PSR infrastructure in Xen.

## Hardware perspective

L2 CAT defines a new range of MSRs to assign different L2 cache access
patterns, known as CBMs (Capacity BitMasks). Each CBM is associated
with a COS.

```
                        +----------------------------+----------------+
 IA32_PQR_ASSOC         | MSR (per socket)           |    Address     |
 +----+---+-------+     +----------------------------+----------------+
 |    |COS|       |     | IA32_L2_QOS_MASK_0         |     0xD10      |
 +----+---+-------+     +----------------------------+----------------+
        └-------------> | ...                        | ...            |
                        +----------------------------+----------------+
                        | IA32_L2_QOS_MASK_n         | 0xD10+n (n<64) |
                        +----------------------------+----------------+
```

When a context switch happens, the COS of the VCPU is written to the
per-thread MSR `IA32_PQR_ASSOC`, and then hardware enforces L2 cache
allocation according to the corresponding CBM.

## The relationship between L2 CAT and L3 CAT/CDP

L2 CAT is independent of L3 CAT/CDP: L2 CAT can be enabled while L3
CAT/CDP is disabled, or L2 CAT and L3 CAT/CDP can all be enabled.

L2 CAT uses a new range of CBM MSRs, from 0xD10 to 0xD10+n (n<64),
following the L3 CAT/CDP MSRs, and supports setting L2 cache access
patterns different from the L3 cache ones.

N.B. L2 CAT and L3 CAT/CDP share the same COS field in the same
association register `IA32_PQR_ASSOC`, which means one COS is
associated with a pair of L2 CBM and L3 CBM.

Besides, the max COS of L2 CAT may differ from that of L3 CAT/CDP (or
other PSR features in the future). In some cases, a VM is permitted to
have a COS that is beyond one (or more) of the PSR features but within
the others. For instance, assume the max COS of L2 CAT is 8 but the max
COS of L3 CAT is 16; when a VM is assigned 9 as COS, the L3 CBM
associated with COS 9 is enforced, but for L2 CAT the behavior is fully
open (no limit), since COS 9 is beyond the max COS (8) of L2 CAT.

## Design Overview

* Core COS/CBM association

  When enforcing L2 CAT, all cores of all domains have the same default
  COS (COS0), which is associated with the fully open CBM (all-ones
  bitmask) and can therefore access all of the L2 cache. The default
  COS is used only in the hypervisor and is transparent to the tool
  stack and the user.

  The system administrator can change the PSR allocation policy at
  runtime through the tool stack.

  Since L2 CAT shares the COS with L3 CAT/CDP, a COS corresponds to a
  2-tuple, like [L2 CBM, L3 CBM], when only CAT is enabled; when CDP is
  enabled, one COS corresponds to a 3-tuple, like [L2 CBM, L3 Code_CBM,
  L3 Data_CBM].
  If neither L3 CAT nor L3 CDP is enabled, things are easier: one COS
  corresponds to one L2 CBM.

* VCPU schedule

  This part reuses the L3 CAT COS infrastructure.

* Multi-sockets

  Different sockets may have different L2 CAT capabilities (e.g. max
  COS), although the capability is consistent within one socket. So the
  L2 CAT capability is specified per socket.

## Implementation Description

* Hypervisor interfaces:

  1. Ext: Boot line parameter "psr=cat" now enables both L2 CAT and L3
     CAT if the hardware supports them.

  2. New: SYSCTL:
          - XEN_SYSCTL_PSR_CAT_get_l2_info: Get L2 CAT information.

  3. New: DOMCTL:
          - XEN_DOMCTL_PSR_CAT_OP_GET_L2_CBM: Get L2 CBM for a domain.
          - XEN_DOMCTL_PSR_CAT_OP_SET_L2_CBM: Set L2 CBM for a domain.

* xl interfaces:

  1. Ext: psr-cat-show -l2 domain-id
          Show the L2 CBM for a domain.
          => XEN_SYSCTL_PSR_CAT_get_l2_info /
             XEN_DOMCTL_PSR_CAT_OP_GET_L2_CBM

  2. Ext: psr-cat-cbm-set -l2 domain-id cbm
          Set the L2 CBM for a domain.
          => XEN_DOMCTL_PSR_CAT_OP_SET_L2_CBM

  3. Ext: psr-hwinfo
          Show PSR HW information, including L2 CAT.
          => XEN_SYSCTL_PSR_CAT_get_l2_info

* Key data structure:

  1. Feature list

     ```
     struct feat_list {
         unsigned int feature;
         struct feat_ops ops;
         unsigned int feat_info[MAX_FEAT_INFO_SIZE];
         uint64_t cos_reg_val[MAX_COS_REG_NUM];
         struct feat_list *pNext;
     };
     ```

     When a PSR enforcement feature is enabled, it is added to a
     feature list. The head of the list is created during psr
     initialization.

     - Member `feature`

       `feature` is an integer indicating which feature this list
       member corresponds to.

     - Member `ops`

       `ops` maintains a list of callback functions for the feature. It
       is introduced in detail later.

     - Member `feat_info`

       `feat_info` maintains the feature's HW information, which can be
       obtained through the psr-hwinfo command.

     - Member `cos_reg_val`

       `cos_reg_val` is an array maintaining the values set in all COS
       registers of the feature.

  2. Per-socket PSR allocation features information structure

     ```
     struct psr_socket_alloc_info {
         unsigned int features;
         unsigned int nr_feat;
         struct feat_list *pFeat;
         struct psr_ref *cos_ref;
         spinlock_t mask_lock;
     };
     ```

     We collect all PSR allocation feature information of a socket in
     this `struct psr_socket_alloc_info`.

     - Member `features`

       `features` is a bitmap indicating which features are enabled on
       the current socket. We define the `features` bitmap as:

       bit 0~1: L3 CAT status; [01] stands for L3 CAT only and [10]
                stands for L3 CDP enabled.
       bit 2:   L2 CAT status.

     - Member `pFeat`

       `pFeat` is the head of the feature list enabled on the socket.

     - Member `cos_ref`

       `cos_ref` is an array maintaining the reference count of each
       COS. If a domain uses the COS, the reference is increased by
       one; if a domain releases the COS, it is decreased by one. The
       array is indexed by COS.

  3. Feature operation functions structure

     ```
     struct feat_ops {
         void (*init_feature)(unsigned int eax, unsigned int ebx,
                              unsigned int ecx, unsigned int edx,
                              struct feat_list *pFeat,
                              struct psr_socket_alloc_info *info);
         int (*get_old_set_new)(uint64_t *mask,
                                struct feat_list *pFeat,
                                unsigned int old_cos,
                                enum mask_type type,
                                uint64_t m);
         int (*compare_mask)(uint64_t *mask,
                             struct feat_list *pFeat,
                             unsigned int cos,
                             bool_t *found);
         unsigned int (*get_cos_max_as_type)(struct feat_list *pFeat,
                                             enum mask_type type);
         unsigned int (*exceed_range)(uint64_t *mask,
                                      struct feat_list *pFeat,
                                      unsigned int cos);
         int (*write_msr)(unsigned int cos, uint64_t *mask,
                          struct feat_list *pFeat);
         int (*get_val)(struct feat_list *pFeat, unsigned int cos,
                        enum mask_type type, uint64_t *val);
         int (*get_feat_info)(struct feat_list *pFeat,
                              enum mask_type type,
                              uint32_t *dat0, uint32_t *dat1,
                              uint32_t *dat2);
         unsigned int (*get_max_cos_max)(struct feat_list *pFeat);
     };
     ```

     We abstract the above callback functions to encapsulate the
     feature-specific behaviors in them. Then it is easy to add a new
     feature.
     We just need to:

     1) Implement such ops and callback functions for every feature.
     2) Register the ops into `struct feat_list`.
     3) Add the feature into the feature list during CPU
        initialization.

# User information

* Feature Enabling:

  Add "psr=cat" to the boot line parameters to enable all supported
  levels of CAT features.

* xl interfaces:

  1. `psr-cat-show [OPTIONS] domain-id`:

     Show a domain's L2 or L3 CAT CBM.

     New option `-l` is added.
     `-l2`: Specify the CBM for the L2 cache.
     `-l3`: Specify the CBM for the L3 cache.

     If neither `-l2` nor `-l3` is given, LLC (Last Level Cache) is the
     default behavior.

  2. `psr-cat-cbm-set [OPTIONS] domain-id cbm`:

     Set a domain's L2 or L3 CBM.

     New option `-l` is added.
     `-l2`: Specify the CBM for the L2 cache.
     `-l3`: Specify the CBM for the L3 cache.

     If neither `-l2` nor `-l3` is given, LLC (Last Level Cache) is the
     default behavior.

     Since there is no CDP support on the L2 cache, an error message is
     shown if `-l2` is given together with `--code` or `--data`.

  3. `psr-hwinfo [OPTIONS]`:

     Show L2 & L3 CAT HW information on every socket.

# References

[Intel® 64 and IA-32 Architectures Software Developer Manuals](http://www.intel.com/content/www/us/en/processors/architectures-software-developer-manuals.html)

# History

------------------------------------------------------------------------
Date       Revision Version  Notes
---------- -------- -------- -------------------------------------------
2016-08-12 1.0      Xen 4.7  Initial design
---------- -------- -------- -------------------------------------------

Yi Sun (3):
  x86: refactor psr implementation in hypervisor.
  x86: add support for L2 CAT in hypervisor.
  tools & docs: add L2 CAT support in tools and docs.
 docs/man/xl.pod.1.in            |   25 +-
 docs/misc/xl-psr.markdown       |   10 +-
 tools/libxc/include/xenctrl.h   |    7 +-
 tools/libxc/xc_psr.c            |   28 +-
 tools/libxl/libxl.h             |    2 +
 tools/libxl/libxl_psr.c         |   50 +-
 tools/libxl/libxl_types.idl     |    1 +
 tools/libxl/xl_cmdimpl.c        |  191 +++++-
 tools/libxl/xl_cmdtable.c       |    4 +-
 xen/arch/x86/domctl.c           |   49 +-
 xen/arch/x86/psr.c              | 1343 ++++++++++++++++++++++++++++++++-------
 xen/arch/x86/sysctl.c           |   19 +-
 xen/include/asm-x86/msr-index.h |    1 +
 xen/include/asm-x86/psr.h       |   23 +-
 xen/include/public/domctl.h     |    2 +
 xen/include/public/sysctl.h     |    3 +-
 16 files changed, 1433 insertions(+), 325 deletions(-)

--
1.9.1

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel