
Re: [Xen-devel] [PATCH] x86/HVM: refine SMEP/SMAP tests in HVM_CR4_GUEST_RESERVED_BITS()


  • To: Jan Beulich <JBeulich@xxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Xen-devel List <xen-devel@xxxxxxxxxxxxx>
  • From: "Wu, Feng" <feng.wu@xxxxxxxxx>
  • Date: Fri, 6 Jun 2014 01:20:07 +0000
  • Accept-language: en-US
  • Cc: Keir Fraser <keir@xxxxxxx>
  • Delivery-date: Fri, 06 Jun 2014 01:21:42 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xen.org>
  • Thread-index: AQHPgLWI66P6ejKAbUqCj/VPK2d2kptjSRLA
  • Thread-topic: [PATCH] x86/HVM: refine SMEP/SMAP tests in HVM_CR4_GUEST_RESERVED_BITS()

Thanks to Andrew for pointing this out, and thanks a lot to Jan for the fix!

Thanks,
Feng

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: Thursday, June 05, 2014 7:59 PM
> To: Andrew Cooper; Xen-devel List
> Cc: Wu, Feng; Keir Fraser
> Subject: [PATCH] x86/HVM: refine SMEP/SMAP tests in
> HVM_CR4_GUEST_RESERVED_BITS()
> 
> Andrew validly points out that the use of the macro on the restore path
> can't rely on the CPUID bits for the guest already being in place (as
> their setting by the tool stack in turn requires the other restore
> operations already having taken place). Even worse, using
> hvm_cpuid() is invalid here because that function assumes it is
> being called in the context of the vCPU in question.
> 
> Reverting to the behavior prior to the change from checking
> cpu_has_sm?p to hvm_vcpu_has_sm?p() would break the other (non-restore)
> use of the macro. So let's revert to the prior behavior only for the
> restore path, by adding a respective second parameter to the macro.
> 
> Obviously the two cpu_has_* uses in the macro should really also be
> converted to hvm_cpuid() based checks at least for the non-restore
> path.
> 
> Reported-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> 
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -1754,7 +1754,7 @@ static int hvm_load_cpu_ctxt(struct doma
>          return -EINVAL;
>      }
> 
> -    if ( ctxt.cr4 & HVM_CR4_GUEST_RESERVED_BITS(v) )
> +    if ( ctxt.cr4 & HVM_CR4_GUEST_RESERVED_BITS(v, 1) )
>      {
>          printk(XENLOG_G_ERR "HVM%d restore: bad CR4 %#" PRIx64 "\n",
>                 d->domain_id, ctxt.cr4);
> @@ -3186,7 +3186,7 @@ int hvm_set_cr4(unsigned long value)
>      struct vcpu *v = current;
>      unsigned long old_cr;
> 
> -    if ( value & HVM_CR4_GUEST_RESERVED_BITS(v) )
> +    if ( value & HVM_CR4_GUEST_RESERVED_BITS(v, 0) )
>      {
>          HVM_DBG_LOG(DBG_LEVEL_1,
>                      "Guest attempts to set reserved bit in CR4: %lx",
> --- a/xen/include/asm-x86/hvm/hvm.h
> +++ b/xen/include/asm-x86/hvm/hvm.h
> @@ -406,19 +406,27 @@ static inline bool_t hvm_vcpu_has_smep(v
>      (X86_CR4_VMXE | X86_CR4_PAE | X86_CR4_MCE))
> 
>  /* These bits in CR4 cannot be set by the guest. */
> -#define HVM_CR4_GUEST_RESERVED_BITS(_v)                 \
> +#define HVM_CR4_GUEST_RESERVED_BITS(v, restore) ({      \
> +    const struct vcpu *_v = (v);                        \
> +    bool_t _restore = !!(restore);                      \
> +    ASSERT((_restore) || _v == current);                \
>      (~((unsigned long)                                  \
>         (X86_CR4_VME | X86_CR4_PVI | X86_CR4_TSD |       \
>          X86_CR4_DE  | X86_CR4_PSE | X86_CR4_PAE |       \
>          X86_CR4_MCE | X86_CR4_PGE | X86_CR4_PCE |       \
>          X86_CR4_OSFXSR | X86_CR4_OSXMMEXCPT |           \
> -        (hvm_vcpu_has_smep() ? X86_CR4_SMEP : 0) |      \
> -        (hvm_vcpu_has_smap() ? X86_CR4_SMAP : 0) |      \
> +        (((_restore) ? cpu_has_smep :                   \
> +                       hvm_vcpu_has_smep()) ?           \
> +         X86_CR4_SMEP : 0) |                            \
> +        (((_restore) ? cpu_has_smap :                   \
> +                       hvm_vcpu_has_smap()) ?           \
> +         X86_CR4_SMAP : 0) |                            \
>          (cpu_has_fsgsbase ? X86_CR4_FSGSBASE : 0) |     \
> -        ((nestedhvm_enabled((_v)->domain) && cpu_has_vmx)\
> -                      ? X86_CR4_VMXE : 0)  |             \
> -        (cpu_has_pcid ? X86_CR4_PCIDE : 0) |             \
> -        (cpu_has_xsave ? X86_CR4_OSXSAVE : 0))))
> +        ((nestedhvm_enabled(_v->domain) && cpu_has_vmx) \
> +                      ? X86_CR4_VMXE : 0)  |            \
> +        (cpu_has_pcid ? X86_CR4_PCIDE : 0) |            \
> +        (cpu_has_xsave ? X86_CR4_OSXSAVE : 0))));       \
> +})
> 
>  /* These exceptions must always be intercepted. */
>  #define HVM_TRAP_MASK ((1U << TRAP_machine_check) | (1U << TRAP_invalid_op))
> 
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
