Re: [Xen-devel] [PATCH RFC v13 06/20] pvh: vmx-specific changes
>>> On 23.09.13 at 18:49, George Dunlap <george.dunlap@xxxxxxxxxxxxx> wrote:
> Changes:
> * Enforce HAP mode for now
> * Disable exits related to virtual interrupts or emulated APICs
> * Disable changing paging mode
> - "unrestricted guest" (i.e., real mode for EPT) disabled
> - write guest EFER disabled
> * Start in 64-bit mode
> * Force TSC mode to be "none"
> * Paging mode update to happen in arch_set_info_guest
>
> Signed-off-by: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
> Signed-off-by: Mukesh Rathor <mukesh.rathor@xxxxxxxxxx>
> ---
> v13:
> - Fix up default cr0 settings
> - Get rid of some unnecessary PVH-related changes
> - Return EOPNOTSUPP instead of ENOSYS if hardware features are not present
> - Remove an unnecessary variable from pvh_check_requirements
> CC: Jan Beulich <jbeulich@xxxxxxxx>
> CC: Tim Deegan <tim@xxxxxxx>
> CC: Keir Fraser <keir@xxxxxxx>
> ---
> xen/arch/x86/hvm/vmx/vmcs.c | 130 +++++++++++++++++++++++++++++++++++++++++--
> 1 file changed, 126 insertions(+), 4 deletions(-)
>
> diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
> index cf54d18..53fccdf 100644
> --- a/xen/arch/x86/hvm/vmx/vmcs.c
> +++ b/xen/arch/x86/hvm/vmx/vmcs.c
> @@ -828,6 +828,60 @@ void virtual_vmcs_vmwrite(void *vvmcs, u32 vmcs_encoding, u64 val)
> virtual_vmcs_exit(vvmcs);
> }
>
> +static int pvh_check_requirements(struct vcpu *v)
> +{
> + u64 required;
> +
> + /* Check for required hardware features */
> + if ( !cpu_has_vmx_ept )
> + {
> + printk(XENLOG_G_INFO "PVH: CPU does not have EPT support\n");
> + return -EOPNOTSUPP;
> + }
> + if ( !cpu_has_vmx_pat )
> + {
> + printk(XENLOG_G_INFO "PVH: CPU does not have PAT support\n");
> + return -EOPNOTSUPP;
> + }
> + if ( !cpu_has_vmx_msr_bitmap )
> + {
> + printk(XENLOG_G_INFO "PVH: CPU does not have msr bitmap\n");
> + return -EOPNOTSUPP;
> + }
> + if ( !cpu_has_vmx_secondary_exec_control )
> + {
> + printk(XENLOG_G_INFO "CPU Secondary exec is required to run PVH\n");
> + return -EOPNOTSUPP;
> + }
Up to here the checks are VMX specific, and hence belong in a VMX
specific file, ...
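The split suggested here might look something like the following minimal sketch. The `cpu_has_vmx_*` flags are stubbed as plain variables for illustration; in Xen they are macros derived from the VMX capability MSRs, and the function would keep living in vmcs.c while the generic checks move to common code.

```c
#include <errno.h>

/* Stubbed capability flags -- assumptions for illustration only; in Xen
 * these are predicates computed from the VMX capability MSRs. */
static int cpu_has_vmx_ept = 1;
static int cpu_has_vmx_pat = 1;
static int cpu_has_vmx_msr_bitmap = 1;
static int cpu_has_vmx_secondary_exec_control = 1;

/*
 * VMX-specific PVH hardware checks: per the review, only these belong
 * in xen/arch/x86/hvm/vmx/vmcs.c.
 */
static int vmx_pvh_check_requirements(void)
{
    if ( !cpu_has_vmx_ept )
        return -EOPNOTSUPP;   /* EPT is required for PVH */
    if ( !cpu_has_vmx_pat )
        return -EOPNOTSUPP;   /* PAT load/save on entry/exit */
    if ( !cpu_has_vmx_msr_bitmap )
        return -EOPNOTSUPP;   /* MSR bitmaps are required */
    if ( !cpu_has_vmx_secondary_exec_control )
        return -EOPNOTSUPP;   /* secondary exec controls are required */
    return 0;
}
```

A caller in common code could then invoke this through a per-vendor hook and fall through to the generic checks only on success.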
> + required = X86_CR4_PAE | X86_CR4_VMXE | X86_CR4_OSFXSR;
> +    if ( (real_cr4_to_pv_guest_cr4(mmu_cr4_features) & required) != required )
> + {
> +        printk(XENLOG_G_INFO "PVH: required CR4 features not available:%lx\n",
> +               required);
> + return -EOPNOTSUPP;
> + }
> +
> + /* Check for required configuration options */
> + if ( !paging_mode_hap(v->domain) )
> + {
> + printk(XENLOG_G_INFO "HAP is required for PVH guest.\n");
> + return -EINVAL;
> + }
> + /*
> + * If rdtsc exiting is turned on and it goes thru emulate_privileged_op,
> + * then pv_vcpu.ctrlreg must be added to the pvh struct.
> + */
> + if ( v->domain->arch.vtsc )
> + {
> + printk(XENLOG_G_INFO
> + "At present PVH only supports the default timer mode\n");
> + return -EINVAL;
> + }
... but all of these are pretty generic (apart from the X86_CR4_VMXE
in the CR4 mask checked above, but I wonder whether that
shouldn't be checked much earlier - for HVM guests no such check
exists here afaik).
> @@ -874,7 +935,32 @@ static int construct_vmcs(struct vcpu *v)
> /* Do not enable Monitor Trap Flag unless start single step debug */
> v->arch.hvm_vmx.exec_control &= ~CPU_BASED_MONITOR_TRAP_FLAG;
>
> + if ( is_pvh_domain(d) )
> + {
> + /* Disable virtual apics, TPR */
> + v->arch.hvm_vmx.secondary_exec_control &=
> + ~(SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES
> + | SECONDARY_EXEC_APIC_REGISTER_VIRT
> + | SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY);
> + v->arch.hvm_vmx.exec_control &= ~CPU_BASED_TPR_SHADOW;
> +
> + /* Disable wbinvd (only necessary for MMIO),
> + * unrestricted guest (real mode for EPT) */
To not confuse the reader (I got confused the last time through,
and now again) this should say "Disable wbinvd exiting ...".
Jan
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel