[PATCH for-4.21??? 3/3] x86/vLAPIC: properly support the CMCI LVT
Rather than unconditionally accepting reads and writes while discarding
the value written, make accesses properly conditional upon CMCI being
exposed via MCG_CAP, and arrange to actually retain the value written.
Also reflect the extra LVT in LVR.

Note that this doesn't change the status quo of us never delivering any
interrupt through this LVT.

Fixes: 70173dbb9948 ("x86/HVM: fix miscellaneous aspects of x2APIC emulation")
Fixes: 8d0a20587e4e ("x86/hvm: further restrict access to x2apic MSRs")
Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
---
The Fixes: tags reference where the explicit mentioning of APIC_CMCI in
what are now guest_{rd,wr}msr_x2apic() was introduced; the mis-handling
really pre-dates that, though.

In principle the later assignment to "nr" in vlapic_do_init() could now
be dropped again. I wasn't quite sure though whether that's a good idea.

--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -31,10 +31,13 @@
 #include <public/hvm/ioreq.h>
 #include <public/hvm/params.h>
 
-#define LVT_BIAS(reg) (((reg) - APIC_LVTT) >> 4)
+#include <../cpu/mcheck/x86_mca.h> /* MCG_CMCI_P */
+
+#define LVT_BIAS(reg) (((reg) - APIC_CMCI) >> 4)
 
 #define LVTS \
-    LVT(LVTT), LVT(LVTTHMR), LVT(LVTPC), LVT(LVT0), LVT(LVT1), LVT(LVTERR),
+    LVT(LVTT), LVT(LVTTHMR), LVT(LVTPC), LVT(LVT0), LVT(LVT1), LVT(LVTERR), \
+    LVT(CMCI),
 
 static const unsigned int lvt_reg[] = {
 #define LVT(which) APIC_ ## which
@@ -57,6 +60,7 @@ static const unsigned int lvt_valid[] =
 #define LVT0_VALID   LINT_MASK
 #define LVT1_VALID   LINT_MASK
 #define LVTERR_VALID LVT_MASK
+#define CMCI_VALID   (LVT_MASK | APIC_DM_MASK)
 #define LVT(which) [LVT_BIAS(APIC_ ## which)] = which ## _VALID
     LVTS
 #undef LVT
@@ -697,8 +701,17 @@ int guest_rdmsr_x2apic(const struct vcpu
         return X86EMUL_EXCEPTION;
 
     offset = reg << 4;
-    if ( offset == APIC_ICR )
+    switch ( offset )
+    {
+    case APIC_ICR:
         high = (uint64_t)vlapic_read_aligned(vlapic, APIC_ICR2) << 32;
+        break;
+
+    case APIC_CMCI:
+        if ( !(v->arch.vmce.mcg_cap & MCG_CMCI_P) )
+            return X86EMUL_EXCEPTION;
+        break;
+    }
 
     *val = high | vlapic_read_aligned(vlapic, offset);
 
@@ -868,6 +881,10 @@ void vlapic_reg_write(struct vcpu *v, un
         vlapic_set_reg(vlapic, APIC_ICR2, val & 0xff000000U);
         break;
 
+    case APIC_CMCI:         /* LVT CMCI */
+        if ( !(v->arch.vmce.mcg_cap & MCG_CMCI_P) )
+            break;
+        fallthrough;
     case APIC_LVTT:         /* LVT Timer Reg */
         if ( vlapic_lvtt_tdt(vlapic) !=
              ((val & APIC_TIMER_MODE_MASK) == APIC_TIMER_MODE_TSC_DEADLINE) )
@@ -1024,9 +1041,12 @@ int guest_wrmsr_x2apic(struct vcpu *v, u
             return X86EMUL_EXCEPTION;
         break;
 
+    case APIC_CMCI:
+        if ( !(v->arch.vmce.mcg_cap & MCG_CMCI_P) )
+            return X86EMUL_EXCEPTION;
+        fallthrough;
     case APIC_LVTTHMR:
     case APIC_LVTPC:
-    case APIC_CMCI:
         if ( val & ~(LVT_MASK | APIC_DM_MASK) )
             return X86EMUL_EXCEPTION;
         break;
@@ -1438,7 +1458,9 @@ static void vlapic_do_init(struct vlapic
     if ( !has_vlapic(vlapic_vcpu(vlapic)->domain) )
         return;
 
-    vlapic_set_reg(vlapic, APIC_LVR, 0x00050014);
+    nr = 6 + !!(vlapic_vcpu(vlapic)->arch.vmce.mcg_cap & MCG_CMCI_P);
+    vlapic_set_reg(vlapic, APIC_LVR,
+                   0x00000014 | MASK_INSR(nr - 1, APIC_LVR_MAXLVT_MASK));
 
     for ( i = 0; i < 8; i++ )
     {
--- a/xen/arch/x86/include/asm/apicdef.h
+++ b/xen/arch/x86/include/asm/apicdef.h
@@ -15,7 +15,10 @@
 #define GET_xAPIC_ID(x)        (((x)>>24)&0xFFu)
 #define SET_xAPIC_ID(x)        (((x)<<24))
 #define APIC_LVR               0x30
-#define APIC_LVR_MASK          0xFF00FF
+#define APIC_LVR_VERSION_MASK  0xff
+#define APIC_LVR_MAXLVT_MASK   0xff0000
+#define APIC_LVR_MASK          (APIC_LVR_VERSION_MASK | \
+                                APIC_LVR_MAXLVT_MASK)
 #define APIC_LVR_DIRECTED_EOI  (1 << 24)
 #define GET_APIC_VERSION(x)    ((x)&0xFF)
 #define GET_APIC_MAXLVT(x)     (((x)>>16)&0xFF)