[Xen-devel] [PATCH v2 3/3] tools: introduce parameter max_wp_ram_ranges.
A new parameter - max_wp_ram_ranges - is added to set the upper limit
of write-protected ram ranges to be tracked inside one ioreq server
rangeset.

An ioreq server uses a group of rangesets to track the I/O or memory
resources to be emulated. The default limit of ranges that one rangeset
can allocate is set to a small value, because these ranges are allocated
in the xen heap. Yet for write-protected ram ranges, there are
circumstances under which the upper limit inside one rangeset should
exceed the default. E.g. in XenGT, when tracking the per-process graphic
translation tables on Intel Broadwell platforms, the number of page
tables concerned will be several thousand (in this case, 8192 is
normally a big enough value). Users who set this item explicitly are
supposed to know the specific scenarios that necessitate this
configuration.

Signed-off-by: Yu Zhang <yu.c.zhang@xxxxxxxxxxxxxxx>
---
 docs/man/xl.cfg.pod.5           | 18 ++++++++++++++++++
 tools/libxl/libxl.h             |  5 +++++
 tools/libxl/libxl_dom.c         |  3 +++
 tools/libxl/libxl_types.idl     |  1 +
 tools/libxl/xl_cmdimpl.c        |  4 ++++
 xen/arch/x86/hvm/hvm.c          | 10 +++++++++-
 xen/include/public/hvm/params.h |  5 ++++-
 7 files changed, 44 insertions(+), 2 deletions(-)

diff --git a/docs/man/xl.cfg.pod.5 b/docs/man/xl.cfg.pod.5
index 8899f75..7634c42 100644
--- a/docs/man/xl.cfg.pod.5
+++ b/docs/man/xl.cfg.pod.5
@@ -962,6 +962,24 @@ FIFO-based event channel ABI support up to 131,071 event channels.
 Other guests are limited to 4095 (64-bit x86 and ARM) or 1023 (32-bit
 x86).
 
+=item B<max_wp_ram_ranges=N>
+
+Limit the maximum write-protected ram ranges that can be tracked
+inside one ioreq server rangeset.
+
+An ioreq server uses a group of rangesets to track the I/O or memory
+resources to be emulated. The default limit of ranges that one rangeset
+can allocate is set to a small value, because these ranges are
+allocated in the xen heap. Yet for write-protected ram ranges, there
+are circumstances under which the upper limit inside one rangeset
+should exceed the default. E.g. in XenGT, when tracking the per-
+process graphic translation tables on Intel Broadwell platforms, the
+number of page tables concerned will be several thousand (in this
+case, 8192 is normally a big enough value). Not configuring this
+item, or setting its value to 0, leaves the upper limit at its
+default. Users who set this item explicitly are supposed to know
+the specific scenarios that necessitate this configuration.
+
 =back
 
 =head2 Paravirtualised (PV) Guest Specific Options
diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index 156c0d5..6698d72 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -136,6 +136,11 @@
 #define LIBXL_HAVE_BUILDINFO_EVENT_CHANNELS 1
 
 /*
+ * libxl_domain_build_info has the u.hvm.max_wp_ram_ranges field.
+ */
+#define LIBXL_HAVE_BUILDINFO_HVM_MAX_WP_RAM_RANGES 1
+
+/*
  * libxl_domain_build_info has the u.hvm.ms_vm_genid field.
 */
 #define LIBXL_HAVE_BUILDINFO_HVM_MS_VM_GENID 1
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index 2269998..54173cb 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -288,6 +288,9 @@ static void hvm_set_conf_params(xc_interface *handle, uint32_t domid,
                     libxl_defbool_val(info->u.hvm.nested_hvm));
     xc_hvm_param_set(handle, domid, HVM_PARAM_ALTP2M,
                     libxl_defbool_val(info->u.hvm.altp2m));
+    if (info->u.hvm.max_wp_ram_ranges > 0)
+        xc_hvm_param_set(handle, domid, HVM_PARAM_MAX_WP_RAM_RANGES,
+                        info->u.hvm.max_wp_ram_ranges);
 }
 
 int libxl__build_pre(libxl__gc *gc, uint32_t domid,
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 9ad7eba..c7d7b5f 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -518,6 +518,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
                                        ("serial_list", libxl_string_list),
                                        ("rdm", libxl_rdm_reserve),
                                        ("rdm_mem_boundary_memkb", MemKB),
+                                       ("max_wp_ram_ranges", uint32),
                                        ])),
                 ("pv", Struct(None, [("kernel", string),
                                      ("slack_memkb", MemKB),
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index 25507c7..8bb7cc7 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -1626,6 +1626,10 @@ static void parse_config_data(const char *config_source,
 
         if (!xlu_cfg_get_long (config, "rdm_mem_boundary", &l, 0))
             b_info->u.hvm.rdm_mem_boundary_memkb = l * 1024;
+
+        if (!xlu_cfg_get_long (config, "max_wp_ram_ranges", &l, 0))
+            b_info->u.hvm.max_wp_ram_ranges = l;
+
         break;
     case LIBXL_DOMAIN_TYPE_PV:
     {
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 53d38e7..fd2b697 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -940,6 +940,10 @@ static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
 {
     unsigned int i;
     int rc;
+    unsigned int max_wp_ram_ranges =
+        ( s->domain->arch.hvm_domain.params[HVM_PARAM_MAX_WP_RAM_RANGES] > 0 ) ?
+        s->domain->arch.hvm_domain.params[HVM_PARAM_MAX_WP_RAM_RANGES] :
+        MAX_NR_IO_RANGES;
 
     if ( is_default )
         goto done;
@@ -962,7 +966,10 @@ static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
         if ( !s->range[i] )
             goto fail;
 
-        rangeset_limit(s->range[i], MAX_NR_IO_RANGES);
+        if ( i == HVMOP_IO_RANGE_WP_MEM )
+            rangeset_limit(s->range[i], max_wp_ram_ranges);
+        else
+            rangeset_limit(s->range[i], MAX_NR_IO_RANGES);
     }
 
 done:
@@ -6009,6 +6016,7 @@ static int hvm_allow_set_param(struct domain *d,
     case HVM_PARAM_IOREQ_SERVER_PFN:
     case HVM_PARAM_NR_IOREQ_SERVER_PAGES:
     case HVM_PARAM_ALTP2M:
+    case HVM_PARAM_MAX_WP_RAM_RANGES:
         if ( value != 0 && a->value != value )
             rc = -EEXIST;
         break;
diff --git a/xen/include/public/hvm/params.h b/xen/include/public/hvm/params.h
index 81f9451..ab3b11d 100644
--- a/xen/include/public/hvm/params.h
+++ b/xen/include/public/hvm/params.h
@@ -210,6 +210,9 @@
 /* Boolean: Enable altp2m */
 #define HVM_PARAM_ALTP2M                    35
 
-#define HVM_NR_PARAMS                       36
+/* Max write-protected ram ranges to be tracked in one ioreq server rangeset */
+#define HVM_PARAM_MAX_WP_RAM_RANGES         36
+
+#define HVM_NR_PARAMS                       37
 
 #endif /* __XEN_PUBLIC_HVM_PARAMS_H__ */
--
1.9.1
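
As a usage illustration (not part of the patch itself), a guest config
for a XenGT-style HVM domain might set the new option like this. The
builder and memory lines are placeholders; 8192 follows the Broadwell
guidance in the commit message:

    builder = "hvm"
    memory = 4096
    # Allow up to 8192 write-protected ram ranges in one ioreq server
    # rangeset; omitting the option or setting 0 keeps the default limit.
    max_wp_ram_ranges = 8192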
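
The same parameter can also be set directly through libxc, e.g. by an
out-of-tree toolstack component. The snippet below is only a minimal
sketch under that assumption (the helper name is made up), mirroring
what the hvm_set_conf_params() hunk above does; note that, per the
hvm_allow_set_param() hunk, a non-zero value cannot later be changed
to a different one.

    #include <stdint.h>
    #include <xenctrl.h>
    #include <xen/hvm/params.h>

    /* Illustrative helper (not part of the patch): set the write-protected
     * ram range limit for an HVM domain before its ioreq servers allocate
     * their rangesets.  A value of 0 leaves the hypervisor default
     * (MAX_NR_IO_RANGES) in place. */
    static int set_wp_ram_limit(uint32_t domid, uint64_t nr_ranges)
    {
        xc_interface *xch = xc_interface_open(NULL, NULL, 0);
        int rc;

        if ( !xch )
            return -1;

        rc = xc_hvm_param_set(xch, domid, HVM_PARAM_MAX_WP_RAM_RANGES,
                              nr_ranges);

        xc_interface_close(xch);
        return rc;
    }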