[Xen-devel] [PATCH v9 5/9] xen/x86: allow HVM guests to use hypercalls to bring up vCPUs
Allow the usage of the VCPUOP_initialise, VCPUOP_up, VCPUOP_down and
VCPUOP_is_up hypercalls from HVM guests.

This patch introduces a new structure (vcpu_hvm_context) that should be
used in conjunction with the VCPUOP_initialise hypercall in order to
initialize vCPUs for HVM guests.

Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Cc: Jan Beulich <jbeulich@xxxxxxxx>
Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Cc: Ian Campbell <ian.campbell@xxxxxxxxxx>
Cc: Stefano Stabellini <stefano.stabellini@xxxxxxxxxx>
---
Changes since v8:
 - Create a generic default_initalize_vcpu function that can be shared
   with x86 PV guests and ARM guests.
 - Pass check_segment register by reference.
 - Check that code/data segments always have the 'S' attribute bit set.
 - Fix some of the checks done in check_segment.
 - Fix indentation of gprintk calls.
 - Remove failure messages in case the padding fields are not zero.
 - Make hvm_efer_valid and hvm_cr4_guest_reserved_bits global and use
   them to check the input values provided by the user.
 - Only check the attribute field in order to figure out if a segment
   is null.
 - Make sure the TR segment is a system segment.

Changes since v7:
 - Improved error messages.
 - Set EFER.LMA if EFER.LME is set by the user.
 - Fix calculation of CS limit.
 - Add more checks to segment registers.
 - Add checks to make sure padding fields are 0.
 - Remove ugly arch ifdefs from common domain.c.
 - Add the implicit padding of vcpu_hvm_x86_32 explicitly in the
   structure.
 - Simplify the compat vcpu code since it's only used on x86.

Changes since v6:
 - Add comments to clarify some initializations.
 - Introduce a generic default_initialize_vcpu that's used to
   initialize an ARM vCPU or an x86 PV vCPU.
 - Move the undef of the SEG macro.
 - Fix the size of the eflags register, it should be 32bits.
 - Add a comment regarding the value of the 12-15 bits of the _ar
   fields.
 - Remove the 16bit structure, the 32bit one can be used to start the
   cpu in real mode.
 - Add some sanity checks to the values passed in.
 - Add paddings to vcpu_hvm_context so the layout on 32/64bits is the
   same.
 - Add support for the compat version of VCPUOP_initialise.

Changes since v5:
 - Fix a coding style issue.
 - Merge the code from wip-dmlite-v5-refactor by Andrew in order to
   reduce bloat.
 - Print the offending %cr3 in case of error when using shadow.
 - Reduce the scope of local variables in arch_initialize_vcpu.
 - s/current->domain/v->domain/g in arch_initialize_vcpu.
 - Expand the comment in public/vcpu.h to document the usage of
   vcpu_hvm_context for HVM guests.
 - Add myself as the copyright holder for the public hvm_vcpu.h header.

Changes since v4:
 - Don't assume mode is 64B, add an explicit check.
 - Don't set TF_kernel_mode, it is only needed for PV guests.
 - Don't set CR0_ET unconditionally.
---
 xen/arch/arm/domain.c             |   5 +
 xen/arch/x86/domain.c             | 314 ++++++++++++++++++++++++++++++++++++++
 xen/arch/x86/hvm/hvm.c            |  15 +-
 xen/common/compat/domain.c        |  50 ++++--
 xen/common/domain.c               |  41 +++--
 xen/include/Makefile              |   1 +
 xen/include/asm-x86/domain.h      |   3 +
 xen/include/asm-x86/hvm/hvm.h     |   5 +
 xen/include/public/hvm/hvm_vcpu.h | 144 +++++++++++++++++
 xen/include/public/vcpu.h         |   6 +-
 xen/include/xen/domain.h          |   4 +
 xen/include/xlat.lst              |   3 +
 12 files changed, 554 insertions(+), 37 deletions(-)
 create mode 100644 xen/include/public/hvm/hvm_vcpu.h

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 860ac7d..423eeaf 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -756,6 +756,11 @@ int arch_set_info_guest(
     return 0;
 }
 
+int arch_initialize_vcpu(struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg)
+{
+    return default_initalize_vcpu(v, arg);
+}
+
 int arch_vcpu_reset(struct vcpu *v)
 {
     vcpu_end_shutdown_deferral(v);
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index df40dc6..0f1e77c 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -37,6 +37,7 @@
 #include <xen/wait.h>
 #include <xen/guest_access.h>
 #include <public/sysctl.h>
+#include <public/hvm/hvm_vcpu.h>
 #include <asm/regs.h>
 #include <asm/mc146818rtc.h>
 #include <asm/system.h>
@@ -1188,6 +1189,319 @@ int arch_set_info_guest(
 #undef c
 }
 
+static inline int check_segment(struct segment_register *reg,
+                                enum x86_segment seg)
+{
+    if ( reg->attr.fields.pad != 0 )
+    {
+        gprintk(XENLOG_ERR,
+                "Attribute bits 12-15 of the segment are not zero\n");
+        return -EINVAL;
+    }
+
+    if ( reg->attr.bytes == 0 )
+    {
+        if ( seg != x86_seg_ds && seg != x86_seg_es )
+        {
+            gprintk(XENLOG_ERR, "Null selector provided for CS, SS or TR\n");
+            return -EINVAL;
+        }
+        return 0;
+    }
+
+    if ( seg != x86_seg_tr && !reg->attr.fields.s )
+    {
+        gprintk(XENLOG_ERR,
+                "System segment provided for a code or data segment\n");
+        return -EINVAL;
+    }
+
+    if ( seg == x86_seg_tr && reg->attr.fields.s )
+    {
+        gprintk(XENLOG_ERR,
+                "Code or data segment provided for TR\n");
+        return -EINVAL;
+    }
+
+    if ( !reg->attr.fields.p )
+    {
+        gprintk(XENLOG_ERR, "Non-present segment provided\n");
+        return -EINVAL;
+    }
+
+    if ( seg == x86_seg_cs && !(reg->attr.fields.type & 0x8) )
+    {
+        gprintk(XENLOG_ERR, "CS is missing code type\n");
+        return -EINVAL;
+    }
+
+    if ( seg == x86_seg_ss &&
+         ((reg->attr.fields.type & 0x8) || !(reg->attr.fields.type & 0x2)) )
+    {
+        gprintk(XENLOG_ERR,
+                "Unable to load a code or non-writeable segment into SS\n");
+        return -EINVAL;
+    }
+
+    if ( reg->attr.fields.s && seg != x86_seg_ss && seg != x86_seg_cs &&
+         (reg->attr.fields.type & 0x8) && !(reg->attr.fields.type & 0x2) )
+    {
+        gprintk(XENLOG_ERR,
+                "Unable to load a non-readable code segment into DS or ES\n");
+        return -EINVAL;
+    }
+
+    return 0;
+}
+
+/* Called by VCPUOP_initialise for HVM guests.
+ */
+int arch_set_info_hvm_guest(struct vcpu *v, const vcpu_hvm_context_t *ctx)
+{
+    struct cpu_user_regs *uregs = &v->arch.user_regs;
+    struct segment_register cs, ds, ss, es, tr;
+    const char *errstr;
+    int rc;
+
+    if ( ctx->pad != 0 )
+        return -EINVAL;
+
+    switch ( ctx->mode )
+    {
+    default:
+        return -EINVAL;
+
+    case VCPU_HVM_MODE_32B:
+    {
+        const struct vcpu_hvm_x86_32 *regs = &ctx->cpu_regs.x86_32;
+        uint32_t limit;
+
+        if ( ctx->cpu_regs.x86_32.pad1 != 0 ||
+             ctx->cpu_regs.x86_32.pad2[0] != 0 ||
+             ctx->cpu_regs.x86_32.pad2[1] != 0 ||
+             ctx->cpu_regs.x86_32.pad2[2] != 0 )
+            return -EINVAL;
+
+#define SEG(s, r)                                                       \
+    (struct segment_register){ .sel = 0, .base = (r)->s ## _base,       \
+                               .limit = (r)->s ## _limit,               \
+                               .attr.bytes = (r)->s ## _ar }
+        cs = SEG(cs, regs);
+        ds = SEG(ds, regs);
+        ss = SEG(ss, regs);
+        es = SEG(es, regs);
+        tr = SEG(tr, regs);
+#undef SEG
+
+        rc = check_segment(&cs, x86_seg_cs);
+        rc |= check_segment(&ds, x86_seg_ds);
+        rc |= check_segment(&ss, x86_seg_ss);
+        rc |= check_segment(&es, x86_seg_es);
+        rc |= check_segment(&tr, x86_seg_tr);
+        if ( rc != 0 )
+            return rc;
+
+        /* Basic sanity checks. */
+        limit = cs.limit;
+        if ( cs.attr.fields.g )
+            limit = (limit << 12) | 0xfff;
+        if ( regs->eip > limit )
+        {
+            gprintk(XENLOG_ERR, "EIP (%#08x) outside CS limit (%#08x)\n",
+                    regs->eip, limit);
+            return -EINVAL;
+        }
+
+        if ( ss.attr.fields.dpl != cs.attr.fields.dpl )
+        {
+            gprintk(XENLOG_ERR, "SS.DPL (%u) is different than CS.DPL (%u)\n",
+                    ss.attr.fields.dpl, cs.attr.fields.dpl);
+            return -EINVAL;
+        }
+
+        if ( ds.attr.fields.p && ds.attr.fields.dpl > cs.attr.fields.dpl )
+        {
+            gprintk(XENLOG_ERR, "DS.DPL (%u) is greater than CS.DPL (%u)\n",
+                    ds.attr.fields.dpl, cs.attr.fields.dpl);
+            return -EINVAL;
+        }
+
+        if ( es.attr.fields.p && es.attr.fields.dpl > cs.attr.fields.dpl )
+        {
+            gprintk(XENLOG_ERR, "ES.DPL (%u) is greater than CS.DPL (%u)\n",
+                    es.attr.fields.dpl, cs.attr.fields.dpl);
+            return -EINVAL;
+        }
+
+        if ( (regs->efer & EFER_LMA) && !(regs->efer & EFER_LME) )
+        {
+            gprintk(XENLOG_ERR, "EFER.LMA set without EFER.LME (%#016lx)\n",
+                    regs->efer);
+            return -EINVAL;
+        }
+
+        uregs->rax    = regs->eax;
+        uregs->rcx    = regs->ecx;
+        uregs->rdx    = regs->edx;
+        uregs->rbx    = regs->ebx;
+        uregs->rsp    = regs->esp;
+        uregs->rbp    = regs->ebp;
+        uregs->rsi    = regs->esi;
+        uregs->rdi    = regs->edi;
+        uregs->rip    = regs->eip;
+        uregs->rflags = regs->eflags;
+
+        v->arch.hvm_vcpu.guest_cr[0] = regs->cr0;
+        v->arch.hvm_vcpu.guest_cr[3] = regs->cr3;
+        v->arch.hvm_vcpu.guest_cr[4] = regs->cr4;
+        v->arch.hvm_vcpu.guest_efer  = regs->efer;
+    }
+    break;
+
+    case VCPU_HVM_MODE_64B:
+    {
+        const struct vcpu_hvm_x86_64 *regs = &ctx->cpu_regs.x86_64;
+
+        /* Basic sanity checks. */
+        if ( !is_canonical_address(regs->rip) )
+        {
+            gprintk(XENLOG_ERR, "RIP contains a non-canonical address (%#lx)\n",
+                    regs->rip);
+            return -EINVAL;
+        }
+
+        if ( !(regs->cr0 & X86_CR0_PG) )
+        {
+            gprintk(XENLOG_ERR, "CR0 doesn't have paging enabled (%#016lx)\n",
+                    regs->cr0);
+            return -EINVAL;
+        }
+
+        if ( !(regs->cr4 & X86_CR4_PAE) )
+        {
+            gprintk(XENLOG_ERR, "CR4 doesn't have PAE enabled (%#016lx)\n",
+                    regs->cr4);
+            return -EINVAL;
+        }
+
+        if ( !(regs->efer & EFER_LME) )
+        {
+            gprintk(XENLOG_ERR, "EFER doesn't have LME enabled (%#016lx)\n",
+                    regs->efer);
+            return -EINVAL;
+        }
+
+        uregs->rax    = regs->rax;
+        uregs->rcx    = regs->rcx;
+        uregs->rdx    = regs->rdx;
+        uregs->rbx    = regs->rbx;
+        uregs->rsp    = regs->rsp;
+        uregs->rbp    = regs->rbp;
+        uregs->rsi    = regs->rsi;
+        uregs->rdi    = regs->rdi;
+        uregs->rip    = regs->rip;
+        uregs->rflags = regs->rflags;
+
+        v->arch.hvm_vcpu.guest_cr[0] = regs->cr0;
+        v->arch.hvm_vcpu.guest_cr[3] = regs->cr3;
+        v->arch.hvm_vcpu.guest_cr[4] = regs->cr4;
+        v->arch.hvm_vcpu.guest_efer  = regs->efer;
+
+#define SEG(b, l, a)                                                    \
+    (struct segment_register){ .sel = 0, .base = (b), .limit = (l),     \
+                               .attr.bytes = (a) }
+        cs = SEG(0, ~0u, 0xa9b); /* 64bit code segment. */
+        ds = ss = es = SEG(0, ~0u, 0xc93);
+        tr = SEG(0, 0x67, 0x8b); /* 64bit TSS (busy). */
+#undef SEG
+    }
+    break;
+
+    }
+
+    if ( v->arch.hvm_vcpu.guest_efer & EFER_LME )
+        v->arch.hvm_vcpu.guest_efer |= EFER_LMA;
+
+    if ( v->arch.hvm_vcpu.guest_cr[4] & hvm_cr4_guest_reserved_bits(v, 1) )
+    {
+        gprintk(XENLOG_ERR, "Bad CR4 value: %#016lx\n",
+                v->arch.hvm_vcpu.guest_cr[4]);
+        return -EINVAL;
+    }
+
+    errstr = hvm_efer_valid(v, v->arch.hvm_vcpu.guest_efer,
+                            MASK_EXTR(v->arch.hvm_vcpu.guest_cr[0],
+                                      X86_CR0_PG));
+    if ( errstr )
+    {
+        gprintk(XENLOG_ERR, "Bad EFER value (%#016lx): %s\n",
+                v->arch.hvm_vcpu.guest_efer, errstr);
+        return -EINVAL;
+    }
+
+    hvm_update_guest_cr(v, 0);
+    hvm_update_guest_cr(v, 3);
+    hvm_update_guest_cr(v, 4);
+    hvm_update_guest_efer(v);
+
+    if ( hvm_paging_enabled(v) && !paging_mode_hap(v->domain) )
+    {
+        /* Shadow-mode CR3 change. Check PDBR and update refcounts. */
+        struct page_info *page = get_page_from_gfn(v->domain,
+                                 v->arch.hvm_vcpu.guest_cr[3] >> PAGE_SHIFT,
+                                 NULL, P2M_ALLOC);
+        if ( !page )
+        {
+            gprintk(XENLOG_ERR, "Invalid CR3: %#lx\n",
+                    v->arch.hvm_vcpu.guest_cr[3]);
+            domain_crash(v->domain);
+            return -EINVAL;
+        }
+
+        v->arch.guest_table = pagetable_from_page(page);
+    }
+
+    hvm_set_segment_register(v, x86_seg_cs, &cs);
+    hvm_set_segment_register(v, x86_seg_ds, &ds);
+    hvm_set_segment_register(v, x86_seg_ss, &ss);
+    hvm_set_segment_register(v, x86_seg_es, &es);
+    hvm_set_segment_register(v, x86_seg_tr, &tr);
+
+    /* Sync AP's TSC with BSP's.
+     */
+    v->arch.hvm_vcpu.cache_tsc_offset =
+        v->domain->vcpu[0]->arch.hvm_vcpu.cache_tsc_offset;
+    hvm_funcs.set_tsc_offset(v, v->arch.hvm_vcpu.cache_tsc_offset,
+                             v->domain->arch.hvm_domain.sync_tsc);
+
+    paging_update_paging_modes(v);
+
+    v->is_initialised = 1;
+    set_bit(_VPF_down, &v->pause_flags);
+
+    return 0;
+}
+
+int arch_initialize_vcpu(struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg)
+{
+    int rc;
+
+    if ( is_hvm_vcpu(v) )
+    {
+        struct domain *d = v->domain;
+        struct vcpu_hvm_context ctxt;
+
+        if ( copy_from_guest(&ctxt, arg, 1) )
+            return -EFAULT;
+
+        domain_lock(d);
+        rc = v->is_initialised ? -EEXIST : arch_set_info_hvm_guest(v, &ctxt);
+        domain_unlock(d);
+    }
+    else
+        rc = default_initalize_vcpu(v, arg);
+
+    return rc;
+}
+
 int arch_vcpu_reset(struct vcpu *v)
 {
     if ( is_pv_vcpu(v) )
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index db0aeba..c569f29 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1833,8 +1833,8 @@ static int hvm_save_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
 }
 
 /* Return a string indicating the error, or NULL for valid. */
-static const char * hvm_efer_valid(const struct vcpu *v, uint64_t value,
-                                   signed int cr0_pg)
+const char *hvm_efer_valid(const struct vcpu *v, uint64_t value,
+                           signed int cr0_pg)
 {
     unsigned int ext1_ecx = 0, ext1_edx = 0;
 
@@ -1902,8 +1902,7 @@ static const char * hvm_efer_valid(const struct vcpu *v, uint64_t value,
                              X86_CR0_CD | X86_CR0_PG)))
 
 /* These bits in CR4 cannot be set by the guest.
  */
-static unsigned long hvm_cr4_guest_reserved_bits(const struct vcpu *v,
-                                                 bool_t restore)
+unsigned long hvm_cr4_guest_reserved_bits(const struct vcpu *v, bool_t restore)
 {
     unsigned int leaf1_ecx = 0, leaf1_edx = 0;
     unsigned int leaf7_0_ebx = 0, leaf7_0_ecx = 0;
@@ -5130,6 +5129,10 @@ static long hvm_vcpu_op(
     case VCPUOP_stop_singleshot_timer:
     case VCPUOP_register_vcpu_info:
     case VCPUOP_register_vcpu_time_memory_area:
+    case VCPUOP_initialise:
+    case VCPUOP_up:
+    case VCPUOP_down:
+    case VCPUOP_is_up:
         rc = do_vcpu_op(cmd, vcpuid, arg);
         break;
     default:
@@ -5188,6 +5191,10 @@ static long hvm_vcpu_op_compat32(
     case VCPUOP_stop_singleshot_timer:
     case VCPUOP_register_vcpu_info:
     case VCPUOP_register_vcpu_time_memory_area:
+    case VCPUOP_initialise:
+    case VCPUOP_up:
+    case VCPUOP_down:
+    case VCPUOP_is_up:
         rc = compat_vcpu_op(cmd, vcpuid, arg);
         break;
     default:
diff --git a/xen/common/compat/domain.c b/xen/common/compat/domain.c
index c60d824..88bfdc8 100644
--- a/xen/common/compat/domain.c
+++ b/xen/common/compat/domain.c
@@ -11,6 +11,7 @@ asm(".file \"" __FILE__ "\"");
 #include <xen/guest_access.h>
 #include <xen/hypercall.h>
 #include <compat/vcpu.h>
+#include <compat/hvm/hvm_vcpu.h>
 
 #define xen_vcpu_set_periodic_timer vcpu_set_periodic_timer
 CHECK_vcpu_set_periodic_timer;
@@ -24,6 +25,14 @@ CHECK_SIZE_(struct, vcpu_info);
 CHECK_vcpu_register_vcpu_info;
 #undef xen_vcpu_register_vcpu_info
 
+#define xen_vcpu_hvm_context vcpu_hvm_context
+#define xen_vcpu_hvm_x86_32 vcpu_hvm_x86_32
+#define xen_vcpu_hvm_x86_64 vcpu_hvm_x86_64
+CHECK_vcpu_hvm_context;
+#undef xen_vcpu_hvm_x86_64
+#undef xen_vcpu_hvm_x86_32
+#undef xen_vcpu_hvm_context
+
 int compat_vcpu_op(int cmd, unsigned int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct domain *d = current->domain;
@@ -37,33 +46,44 @@ int compat_vcpu_op(int cmd, unsigned int vcpuid, XEN_GUEST_HANDLE_PARAM(void) ar
     {
     case VCPUOP_initialise:
     {
-        struct compat_vcpu_guest_context *cmp_ctxt;
-
         if ( v->vcpu_info == &dummy_vcpu_info )
             return -EINVAL;
 
-        if ( (cmp_ctxt = xmalloc(struct compat_vcpu_guest_context)) == NULL )
+        if ( is_hvm_vcpu(v) )
         {
-            rc = -ENOMEM;
-            break;
-        }
+            struct vcpu_hvm_context ctxt;
 
-        if ( copy_from_guest(cmp_ctxt, arg, 1) )
-        {
-            xfree(cmp_ctxt);
-            rc = -EFAULT;
-            break;
+            if ( copy_from_guest(&ctxt, arg, 1) )
+                return -EFAULT;
+
+            domain_lock(d);
+            rc = v->is_initialised ? -EEXIST : arch_set_info_hvm_guest(v, &ctxt);
+            domain_unlock(d);
         }
+        else
+        {
+            struct compat_vcpu_guest_context *ctxt;
+
+            if ( (ctxt = xmalloc(struct compat_vcpu_guest_context)) == NULL )
+                return -ENOMEM;
+
+            if ( copy_from_guest(ctxt, arg, 1) )
+            {
+                xfree(ctxt);
+                return -EFAULT;
+            }
 
-        domain_lock(d);
-        rc = v->is_initialised ? -EEXIST : arch_set_info_guest(v, cmp_ctxt);
-        domain_unlock(d);
+            domain_lock(d);
+            rc = v->is_initialised ? -EEXIST : arch_set_info_guest(v, ctxt);
+            domain_unlock(d);
+
+            xfree(ctxt);
+        }
 
         if ( rc == -ERESTART )
             rc = hypercall_create_continuation(__HYPERVISOR_vcpu_op,
                                                "iuh", cmd, vcpuid, arg);
 
-        xfree(cmp_ctxt);
         break;
     }
diff --git a/xen/common/domain.c b/xen/common/domain.c
index f56b7ff..3690eec 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -1208,11 +1208,34 @@ void unmap_vcpu_info(struct vcpu *v)
     put_page_and_type(mfn_to_page(mfn));
 }
 
+int default_initalize_vcpu(struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg)
+{
+    struct vcpu_guest_context *ctxt;
+    struct domain *d = v->domain;
+    int rc;
+
+    if ( (ctxt = alloc_vcpu_guest_context()) == NULL )
+        return -ENOMEM;
+
+    if ( copy_from_guest(ctxt, arg, 1) )
+    {
+        free_vcpu_guest_context(ctxt);
+        return -EFAULT;
+    }
+
+    domain_lock(d);
+    rc = v->is_initialised ? -EEXIST : arch_set_info_guest(v, ctxt);
+    domain_unlock(d);
+
+    free_vcpu_guest_context(ctxt);
+
+    return rc;
+}
+
 long do_vcpu_op(int cmd, unsigned int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct domain *d = current->domain;
     struct vcpu *v;
-    struct vcpu_guest_context *ctxt;
     long rc = 0;
 
     if ( vcpuid >= d->max_vcpus || (v = d->vcpu[vcpuid]) == NULL )
@@ -1224,21 +1247,7 @@ long do_vcpu_op(int cmd, unsigned int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
         if ( v->vcpu_info == &dummy_vcpu_info )
             return -EINVAL;
 
-        if ( (ctxt = alloc_vcpu_guest_context()) == NULL )
-            return -ENOMEM;
-
-        if ( copy_from_guest(ctxt, arg, 1) )
-        {
-            free_vcpu_guest_context(ctxt);
-            return -EFAULT;
-        }
-
-        domain_lock(d);
-        rc = v->is_initialised ? -EEXIST : arch_set_info_guest(v, ctxt);
-        domain_unlock(d);
-
-        free_vcpu_guest_context(ctxt);
-
+        rc = arch_initialize_vcpu(v, arg);
         if ( rc == -ERESTART )
             rc = hypercall_create_continuation(__HYPERVISOR_vcpu_op,
                                                "iuh", cmd, vcpuid, arg);
diff --git a/xen/include/Makefile b/xen/include/Makefile
index c674f7f..94ba3d8 100644
--- a/xen/include/Makefile
+++ b/xen/include/Makefile
@@ -26,6 +26,7 @@ headers-$(CONFIG_X86) += compat/arch-x86/pmu.h
 headers-$(CONFIG_X86) += compat/arch-x86/xen-mca.h
 headers-$(CONFIG_X86) += compat/arch-x86/xen.h
 headers-$(CONFIG_X86) += compat/arch-x86/xen-$(compat-arch-y).h
+headers-$(CONFIG_X86) += compat/hvm/hvm_vcpu.h
 headers-y += compat/arch-$(compat-arch-y).h compat/pmu.h compat/xlat.h
 headers-$(FLASK_ENABLE) += compat/xsm/flask_op.h
 
diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index c825975..e8f7037 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -600,6 +600,9 @@ static inline void free_vcpu_guest_context(struct vcpu_guest_context *vgc)
     vfree(vgc);
 }
 
+struct vcpu_hvm_context;
+int arch_set_info_hvm_guest(struct vcpu *v,
+                            const struct vcpu_hvm_context *ctx);
+
 #endif /* __ASM_DOMAIN_H__ */
 
 /*
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index f80e143..b9d893d 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -560,6 +560,11 @@ void altp2m_vcpu_update_vmfunc_ve(struct vcpu *v);
 /* emulates #VE */
 bool_t altp2m_vcpu_emulate_ve(struct vcpu *v);
 
+/* Check CR4/EFER values */
+const char *hvm_efer_valid(const struct vcpu *v, uint64_t value,
+                           signed int cr0_pg);
+unsigned long hvm_cr4_guest_reserved_bits(const struct vcpu *v, bool_t restore);
+
 #endif /* __ASM_X86_HVM_HVM_H__ */
 
 /*
diff --git a/xen/include/public/hvm/hvm_vcpu.h b/xen/include/public/hvm/hvm_vcpu.h
new file mode 100644
index 0000000..d21abf1
--- /dev/null
+++ b/xen/include/public/hvm/hvm_vcpu.h
@@ -0,0 +1,144 @@
+/*
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to
+ * deal in the Software without restriction, including without limitation the
+ * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Copyright (c) 2015, Roger Pau Monne <roger.pau@xxxxxxxxxx>
+ */
+
+#ifndef __XEN_PUBLIC_HVM_HVM_VCPU_H__
+#define __XEN_PUBLIC_HVM_HVM_VCPU_H__
+
+#include "../xen.h"
+
+struct vcpu_hvm_x86_32 {
+    uint32_t eax;
+    uint32_t ecx;
+    uint32_t edx;
+    uint32_t ebx;
+    uint32_t esp;
+    uint32_t ebp;
+    uint32_t esi;
+    uint32_t edi;
+    uint32_t eip;
+    uint32_t eflags;
+
+    uint32_t cr0;
+    uint32_t cr3;
+    uint32_t cr4;
+
+    uint32_t pad1;
+
+    /*
+     * EFER should only be used to set the NXE bit (if required)
+     * when starting a vCPU in 32bit mode with paging enabled or
+     * to set the LME/LMA bits in order to start the vCPU in
+     * compatibility mode.
+     */
+    uint64_t efer;
+
+    uint32_t cs_base;
+    uint32_t ds_base;
+    uint32_t ss_base;
+    uint32_t es_base;
+    uint32_t tr_base;
+    uint32_t cs_limit;
+    uint32_t ds_limit;
+    uint32_t ss_limit;
+    uint32_t es_limit;
+    uint32_t tr_limit;
+    uint16_t cs_ar;
+    uint16_t ds_ar;
+    uint16_t ss_ar;
+    uint16_t es_ar;
+    uint16_t tr_ar;
+
+    uint16_t pad2[3];
+};
+
+/*
+ * The layout of the _ar fields of the segment registers is the
+ * following:
+ *
+ * Bits [0,3]: type (bits 40-43).
+ * Bit      4: s (descriptor type, bit 44).
+ * Bits [5,6]: dpl (descriptor privilege level, bits 45-46).
+ * Bit      7: p (segment-present, bit 47).
+ * Bit      8: avl (available for system software, bit 52).
+ * Bit      9: l (64-bit code segment, bit 53).
+ * Bit     10: db (meaning depends on the segment, bit 54).
+ * Bit     11: g (granularity, bit 55).
+ * Bits [12,15]: unused, must be blank.
+ *
+ * A more complete description of the meaning of these fields can be
+ * obtained from the Intel SDM, Volume 3, section 3.4.5.
+ */
+
+struct vcpu_hvm_x86_64 {
+    uint64_t rax;
+    uint64_t rcx;
+    uint64_t rdx;
+    uint64_t rbx;
+    uint64_t rsp;
+    uint64_t rbp;
+    uint64_t rsi;
+    uint64_t rdi;
+    uint64_t rip;
+    uint64_t rflags;
+
+    uint64_t cr0;
+    uint64_t cr3;
+    uint64_t cr4;
+    uint64_t efer;
+
+    /*
+     * Using VCPU_HVM_MODE_64B implies that the vCPU is launched
+     * directly in long mode, so the cached parts of the segment
+     * registers get set to match that environment.
+     *
+     * If the user wants to launch the vCPU in compatibility mode
+     * the 32-bit structure should be used instead.
+     */
+};
+
+struct vcpu_hvm_context {
+#define VCPU_HVM_MODE_32B 0  /* 32bit fields of the structure will be used. */
+#define VCPU_HVM_MODE_64B 1  /* 64bit fields of the structure will be used. */
+    uint32_t mode;
+
+    uint32_t pad;
+
+    /* CPU registers. */
+    union {
+        struct vcpu_hvm_x86_32 x86_32;
+        struct vcpu_hvm_x86_64 x86_64;
+    } cpu_regs;
+};
+typedef struct vcpu_hvm_context vcpu_hvm_context_t;
+DEFINE_XEN_GUEST_HANDLE(vcpu_hvm_context_t);
+
+#endif /* __XEN_PUBLIC_HVM_HVM_VCPU_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/public/vcpu.h b/xen/include/public/vcpu.h
index 898b89f..692b87a 100644
--- a/xen/include/public/vcpu.h
+++ b/xen/include/public/vcpu.h
@@ -41,8 +41,10 @@
  * Initialise a VCPU. Each VCPU can be initialised only once. A
  * newly-initialised VCPU will not run until it is brought up by VCPUOP_up.
  *
- * @extra_arg == pointer to vcpu_guest_context structure containing initial
- *               state for the VCPU.
+ * @extra_arg == For PV or ARM guests this is a pointer to a vcpu_guest_context
+ *               structure containing the initial state for the VCPU. For x86
+ *               HVM based guests this is a pointer to a vcpu_hvm_context
+ *               structure.
  */
 #define VCPUOP_initialise            0
 
diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h
index 3dca3f6..269bbe7 100644
--- a/xen/include/xen/domain.h
+++ b/xen/include/xen/domain.h
@@ -64,6 +64,8 @@ int arch_domain_soft_reset(struct domain *d);
 int arch_set_info_guest(struct vcpu *, vcpu_guest_context_u);
 void arch_get_info_guest(struct vcpu *, vcpu_guest_context_u);
 
+int arch_initialize_vcpu(struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg);
+
 int domain_relinquish_resources(struct domain *d);
 
 void dump_pageframe_info(struct domain *d);
@@ -103,4 +105,6 @@ struct vnuma_info {
 
 void vnuma_destroy(struct vnuma_info *vnuma);
 
+int default_initalize_vcpu(struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg);
+
 #endif /* __XEN_DOMAIN_H__ */
diff --git a/xen/include/xlat.lst b/xen/include/xlat.lst
index 3795059..fda1137 100644
--- a/xen/include/xlat.lst
+++ b/xen/include/xlat.lst
@@ -56,6 +56,9 @@
 ?	grant_entry_header		grant_table.h
 ?	grant_entry_v2			grant_table.h
 ?	gnttab_swap_grant_ref		grant_table.h
+?	vcpu_hvm_context		hvm/hvm_vcpu.h
+?	vcpu_hvm_x86_32			hvm/hvm_vcpu.h
+?	vcpu_hvm_x86_64			hvm/hvm_vcpu.h
 ?	kexec_exec			kexec.h
 !	kexec_image			kexec.h
 !	kexec_range			kexec.h
-- 
1.9.5 (Apple Git-50.3)

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel