
[Xen-devel] [PATCH 10/15] xen: vmx: handle ENCLS VMEXIT



Currently EPC is statically allocated and mapped to the guest, so we don't
need to trap ENCLS, as it runs perfectly in VMX non-root mode. But exposing
SGX to the guest means we also expose the ENABLE_ENCLS bit to the L1
hypervisor, so we cannot stop L1 from enabling ENCLS VMEXIT. An ENCLS VMEXIT
coming from an L2 guest is simply injected to L1; any other ENCLS VMEXIT is
unexpected in L0, and we simply crash the domain.
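
For reference only (not part of this patch): should L0 ever need to intercept
ENCLS itself, a minimal sketch would set SECONDARY_EXEC_ENABLE_ENCLS and
program the new ENCLS_EXITING_BITMAP VMCS field introduced below. The helper
name is made up for illustration; it assumes the usual VMCS accessors
(vmx_vmcs_enter/__vmwrite/vmx_vmcs_exit) and the SECONDARY_EXEC_ENABLE_ENCLS
definition from earlier in this series.

    /* Hypothetical sketch, for illustration only, not part of this patch. */
    static void vmx_enable_encls_interception(struct vcpu *v)
    {
        v->arch.hvm_vmx.secondary_exec_control |= SECONDARY_EXEC_ENABLE_ENCLS;

        vmx_vmcs_enter(v);
        __vmwrite(SECONDARY_VM_EXEC_CONTROL,
                  v->arch.hvm_vmx.secondary_exec_control);
        /* Setting all bits traps every ENCLS leaf function. */
        __vmwrite(ENCLS_EXITING_BITMAP, ~0UL);
        vmx_vmcs_exit(v);
    }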

Signed-off-by: Kai Huang <kai.huang@xxxxxxxxxxxxxxx>
---
 xen/arch/x86/hvm/vmx/vmx.c         | 10 ++++++++++
 xen/arch/x86/hvm/vmx/vvmx.c        | 11 +++++++++++
 xen/include/asm-x86/hvm/vmx/vmcs.h |  1 +
 xen/include/asm-x86/hvm/vmx/vmx.h  |  1 +
 4 files changed, 23 insertions(+)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 7ee5515bdc..ea3d468bb0 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -4126,6 +4126,16 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
         vmx_handle_apic_write();
         break;
 
+    case EXIT_REASON_ENCLS:
+        /*
+         * Currently L0 doesn't turn on ENCLS VMEXIT, but L0 cannot stop L1
+         * from enabling it. An ENCLS VMEXIT from an L2 guest has already
+         * been handled by the nested code, so reaching here is a BUG.
+         * We simply crash the domain.
+         */
+        domain_crash(v->domain);
+        break;
+
     case EXIT_REASON_PML_FULL:
         vmx_vcpu_flush_pml_buffer(v);
         break;
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 3560faec6d..7eb10738d9 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -2059,6 +2059,12 @@ int nvmx_msr_read_intercept(unsigned int msr, u64 *msr_content)
                SECONDARY_EXEC_ENABLE_VPID |
                SECONDARY_EXEC_UNRESTRICTED_GUEST |
                SECONDARY_EXEC_ENABLE_EPT;
+        /*
+         * If SGX is exposed to the guest, then the ENABLE_ENCLS bit must
+         * also be exposed to the guest.
+         */
+        if ( domain_has_sgx(d) )
+            data |= SECONDARY_EXEC_ENABLE_ENCLS;
         data = gen_vmx_msr(data, 0, host_data);
         break;
     case MSR_IA32_VMX_EXIT_CTLS:
@@ -2291,6 +2297,11 @@ int nvmx_n2_vmexit_handler(struct cpu_user_regs *regs,
     case EXIT_REASON_VMXON:
     case EXIT_REASON_INVEPT:
     case EXIT_REASON_XSETBV:
+    /*
+     * L0 doesn't turn on ENCLS VMEXIT for now, so an ENCLS VMEXIT must come
+     * from an L2 guest, and only because L1 has turned on ENCLS VMEXIT.
+     */
+    case EXIT_REASON_ENCLS:
         /* inject to L1 */
         nvcpu->nv_vmexit_pending = 1;
         break;
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index fc0b9d85fd..1350b7bc81 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -396,6 +396,7 @@ enum vmcs_field {
     VIRT_EXCEPTION_INFO             = 0x0000202a,
     XSS_EXIT_BITMAP                 = 0x0000202c,
+    ENCLS_EXITING_BITMAP            = 0x0000202e,
     TSC_MULTIPLIER                  = 0x00002032,
     GUEST_PHYSICAL_ADDRESS          = 0x00002400,
     VMCS_LINK_POINTER               = 0x00002800,
     GUEST_IA32_DEBUGCTL             = 0x00002802,
diff --git a/xen/include/asm-x86/hvm/vmx/vmx.h b/xen/include/asm-x86/hvm/vmx/vmx.h
index 4889a64255..211f5c8058 100644
--- a/xen/include/asm-x86/hvm/vmx/vmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vmx.h
@@ -210,6 +210,7 @@ static inline void pi_clear_sn(struct pi_desc *pi_desc)
 #define EXIT_REASON_APIC_WRITE          56
 #define EXIT_REASON_INVPCID             58
 #define EXIT_REASON_VMFUNC              59
+#define EXIT_REASON_ENCLS               60
 #define EXIT_REASON_PML_FULL            62
 #define EXIT_REASON_XSAVES              63
 #define EXIT_REASON_XRSTORS             64
-- 
2.11.0

