Re: [Xen-devel] [PATCH 2/2] x86/VMX: fix vmentry failure with TSX bits in LBR
>>> On 25.01.17 at 18:26, <sergey.dyasli@xxxxxxxxxx> wrote:
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -2241,6 +2241,25 @@ static void pi_notification_interrupt(struct cpu_user_regs *regs)
> raise_softirq(VCPU_KICK_SOFTIRQ);
> }
>
> +static bool __read_mostly vmx_lbr_tsx_fixup_needed;
> +
> +static void __init vmx_lbr_tsx_fixup_check(void)
> +{
> + bool tsx_support = cpu_has_hle || cpu_has_rtm;
> + u64 caps;
> + u32 lbr_format;
> +
> + if ( !cpu_has_pdcm )
> + return;
> +
> + rdmsrl(MSR_IA32_PERF_CAPABILITIES, caps);
> + /* Lower 6 bits define the format of the address in the LBR stack */
> + lbr_format = caps & 0x3f;
Please add a #define for the constant here and ...
> + /* 000100B -- TSX info is reported in the upper bits of 'FROM' registers */
> + vmx_lbr_tsx_fixup_needed = !tsx_support && lbr_format == 0x4;
... an enum for the known values (to be used here).
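Something along these lines would satisfy both requests; the macro/enumerator names shown are purely illustrative, not something the patch or this review mandates:

    /* MSR_IA32_PERF_CAPABILITIES: the low 6 bits give the LBR record format. */
    #define PERF_CAP_LBR_FORMAT_MASK  0x3f

    /* LBR format values relevant to this check (illustrative name only). */
    enum {
        LBR_FORMAT_EIP_FLAGS_TSX = 0x4,  /* 000100B: TSX info in 'FROM' MSRs */
    };

which would turn the two lines above into

    lbr_format = caps & PERF_CAP_LBR_FORMAT_MASK;
    vmx_lbr_tsx_fixup_needed = !tsx_support &&
                               lbr_format == LBR_FORMAT_EIP_FLAGS_TSX;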
> @@ -2833,7 +2854,11 @@ static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content)
> for ( ; (rc == 0) && lbr->count; lbr++ )
> for ( i = 0; (rc == 0) && (i < lbr->count); i++ )
> if ( (rc = vmx_add_guest_msr(lbr->base + i)) == 0 )
> + {
> vmx_disable_intercept_for_msr(v, lbr->base + i,
> MSR_TYPE_R | MSR_TYPE_W);
> + if ( vmx_lbr_tsx_fixup_needed )
> + v->arch.hvm_vmx.lbr_tsx_fixup_enabled = true;
Why not simply

    v->arch.hvm_vmx.lbr_tsx_fixup_enabled = vmx_lbr_tsx_fixup_needed;

?
> @@ -3854,6 +3879,46 @@ out:
> }
> }
>
> +static void vmx_lbr_tsx_fixup(void)
> +{
> + static const u64 LBR_FROM_SIGNEXT_2MSB = (1ULL << 59) | (1ULL << 60);
3ULL << 59 would read a little less strangely to someone who hasn't yet
reached the point of use. I'm also not convinced a static const variable
is warranted here at all, especially since the identifier already looks
like a #define (being all upper case). If you do want to keep it, please
make it uint64_t (and likewise elsewhere).
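Taken together, the suggestion boils down to something like (a sketch, not the final form):

    /* Bits 59:60 of an LBR 'FROM' value; the fixup below shifts them into 61:62. */
    #define LBR_FROM_SIGNEXT_2MSB  (3ULL << 59)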
> + static u32 lbr_from_start = 0, lbr_from_end = 0, last_in_from_ip = 0;
> +
No blank line in the middle of declarations please. And did you
perhaps mean "last_int_from_ip"?
> + const struct lbr_info *lbr;
> + const struct vcpu *curr = current;
> + const unsigned int msr_count = curr->arch.hvm_vmx.msr_count;
> + const struct vmx_msr_entry *msr_area = curr->arch.hvm_vmx.msr_area;
> + struct vmx_msr_entry *msr;
> +
> + if ( lbr_from_start == ~0U )
> + return;
> +
> + if ( unlikely(lbr_from_start == 0) )
> + {
> + lbr = last_branch_msr_get();
> + if ( lbr == NULL )
> + {
> + lbr_from_start = ~0U;
> + return;
> + }
> +
> + /* Warning: this depends on struct lbr_info[] layout! */
Please make suitable #define-s for them then, placed next to the
array definition.
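For illustration, such #define-s (names invented here, next to the lbr_info tables) could look like:

    /* Indexes into the per-model struct lbr_info[] tables, relied upon below. */
    #define LBR_LASTINT_FROM_IDX  0  /* last interrupt 'from' entry */
    #define LBR_FROM_IDX          3  /* last branch 'from' block    */

so the three assignments below would use lbr[LBR_LASTINT_FROM_IDX] and lbr[LBR_FROM_IDX] instead of the bare 0 and 3.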
> + last_in_from_ip = lbr[0].base;
> + lbr_from_start = lbr[3].base;
> + lbr_from_end = lbr_from_start + lbr[3].count;
There's a race here: lbr_from_start needs to be written strictly
last, for the static variable latching to work in all cases.
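A sketch of the ordering being asked for (using the last_int_from_ip spelling suggested above; whether smp_wmb() or a plain compiler barrier is the right fence here is my assumption, the essential part is that the latch variable is stored last):

        last_int_from_ip = lbr[0].base;
        lbr_from_end     = lbr[3].base + lbr[3].count;
        smp_wmb();                      /* publish the two stores above ...     */
        lbr_from_start   = lbr[3].base; /* ... before the latch becomes visible */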
> + }
> +
> + if ( (msr = vmx_find_guest_msr(lbr_from_start)) != NULL )
> + {
> + /* Sign extend into bits 61:62 while preserving bit 63 */
> + for ( ; msr < msr_area + msr_count && msr->index < lbr_from_end;
> +              msr++ )
> + msr->data |= ((LBR_FROM_SIGNEXT_2MSB & msr->data) << 2);
Please also clarify in the comment that this depends on the array
being sorted at all times.
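A possible expansion of that comment, shown in place (wording is only a suggestion):

        /*
         * Sign extend into bits 61:62 while preserving bit 63.
         *
         * The guest MSR area is kept sorted by MSR index at all times, so
         * the FROM registers form a contiguous run and this single linear
         * scan starting at lbr_from_start covers exactly [start, end).
         */
        for ( ; msr < msr_area + msr_count && msr->index < lbr_from_end;
              msr++ )
            msr->data |= (LBR_FROM_SIGNEXT_2MSB & msr->data) << 2;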
> --- a/xen/include/asm-x86/cpufeature.h
> +++ b/xen/include/asm-x86/cpufeature.h
> @@ -78,6 +78,9 @@
> #define cpu_has_cmp_legacy boot_cpu_has(X86_FEATURE_CMP_LEGACY)
> #define cpu_has_tbm boot_cpu_has(X86_FEATURE_TBM)
> #define cpu_has_itsc boot_cpu_has(X86_FEATURE_ITSC)
> +#define cpu_has_hle boot_cpu_has(X86_FEATURE_HLE)
> +#define cpu_has_rtm boot_cpu_has(X86_FEATURE_RTM)
> +#define cpu_has_pdcm boot_cpu_has(X86_FEATURE_PDCM)
Hard tabs are to be used here, as suggested by the neighboring
entries.
> --- a/xen/include/asm-x86/hvm/vmx/vmcs.h
> +++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
> @@ -231,6 +231,8 @@ struct arch_vmx_struct {
> * pCPU and wakeup the related vCPU.
> */
> struct pi_blocking_vcpu pi_blocking;
> +
> + bool lbr_tsx_fixup_enabled;
Please fit this into a currently unused byte.
Having reached this point: migration doesn't need any similar
adjustment simply because we don't migrate these MSR values
(yet)? If so, this may be worth stating in the commit message.
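As a generic illustration of the "unused byte" point (this is not the real arch_vmx_struct layout, just a made-up example): a bool dropped into existing padding does not grow the structure, whereas appending it at the end typically does:

    #include <stdbool.h>
    #include <stdint.h>

    struct example {
        void     *ptr;       /* offset 0, 8 bytes                        */
        uint32_t  count;     /* offset 8, 4 bytes                        */
        bool      new_flag;  /* offset 12: reuses one of the 4 pad bytes */
    };                       /* sizeof(struct example) stays 16          */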
Jan