[Xen-devel] [PATCH 00/18] Provide a command line option to choose how to handle SErrors
From XSA-201 (see [1]), we know that a guest could trigger SErrors when
accessing memory-mapped HW in a non-conventional way. In the patches for
XSA-201, we crash the guest when we catch such asynchronous aborts, to
avoid data corruption.
In order to distinguish guest-generated SErrors from hypervisor-generated
SErrors, we have to place SError checking code on every EL1 -> EL2 path.
The dsb/isb barriers this requires add overhead to every entry.
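For illustration, here is a minimal sketch of such a synchronization point
(the helper name is hypothetical; the series introduces a similar helper
in a later patch):

    /*
     * Illustrative sketch only, not the series' actual code: force any
     * SError caused by earlier memory accesses to become pending before
     * the subsequent check.  The dsb completes outstanding memory
     * accesses; the isb is a context synchronization event, after which
     * a pending asynchronous abort becomes observable.
     */
    static inline void synchronize_serror(void)
    {
        asm volatile ( "dsb sy; isb" : : : "memory" );
    }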
But not all platforms want to categorize SErrors. For example, on a host
running only trusted guests, the administrator can confirm that none of the
guests will trigger such SErrors. In this scenario, we should provide an
option for the administrator to skip the categorization and thereby reduce
the dsb/isb overhead.
We provide the following three options for the administrator to determine
how SErrors are handled (a sketch of the corresponding command line parsing
follows the list):
* `diverse`:
  The hypervisor will distinguish guest SErrors from hypervisor SErrors.
  Guest-generated SErrors will be forwarded to guests; hypervisor-generated
  SErrors will cause the whole system to crash.
  It requires:
  1. Placing dsb/isb on all EL1 -> EL2 trap entries to categorize SErrors
     correctly.
  2. Placing dsb/isb on EL2 -> EL1 return paths to prevent hypervisor
     SErrors from slipping into guests.
  3. Placing dsb/isb in the context switch path to isolate SErrors between
     two vCPUs.
* `forward`:
  The hypervisor will not distinguish guest SErrors from hypervisor SErrors.
  All SErrors will be forwarded to guests, except SErrors generated while
  the idle vCPU is running. The idle domain has no way to handle SErrors,
  so we have to crash the whole system when we get an SError on the idle
  vCPU. This option avoids most of the dsb/isb overhead, except the dsb/isb
  in the context switch path, which is used to isolate SErrors between two
  vCPUs.
* `panic`:
  The hypervisor will not distinguish guest SErrors from hypervisor SErrors.
  All SErrors will crash the whole system. This option avoids all of the
  dsb/isb overhead.
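For illustration, a sketch of how such a tri-state option could be parsed
with Xen's custom_param() mechanism. The enum, variable and function names
below are assumptions for this sketch, not necessarily what the patches use:

    #include <xen/init.h>
    #include <xen/string.h>

    enum serrors_behavior {
        SERRORS_DIVERSE,
        SERRORS_FORWARD,
        SERRORS_PANIC,
    };

    static enum serrors_behavior serrors_op = SERRORS_DIVERSE;

    /* Parse "serrors=diverse|forward|panic" from the Xen command line;
     * anything unrecognized falls back to the safe default, diverse. */
    static void __init parse_serrors_behavior(const char *str)
    {
        if ( !strcmp(str, "forward") )
            serrors_op = SERRORS_FORWARD;
        else if ( !strcmp(str, "panic") )
            serrors_op = SERRORS_PANIC;
        else
            serrors_op = SERRORS_DIVERSE;
    }
    custom_param("serrors", parse_serrors_behavior);

With something like this in place, booting with "serrors=panic" would select
the cheapest mode, at the price of crashing the host on any SError.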
Wei Chen (18):
xen/arm: Introduce a helper to get default HCR_EL2 flags
xen/arm: Restore HCR_EL2 register
xen/arm: Avoid setting/clearing HCR_RW at every context switch
xen/arm: Save HCR_EL2 when a guest took the SError
xen/arm: Save ESR_EL2 to avoid using mismatched value in syndrome
check
xen/arm: Introduce a virtual abort injection helper
xen/arm: Introduce a command line parameter for SErrors/Aborts
xen/arm: Introduce a initcall to update cpu_hwcaps by serror_op
xen/arm64: Use alternative to skip the check of pending serrors
xen/arm32: Use cpu_hwcaps to skip the check of pending serrors
xen/arm: Move macro VABORT_GEN_BY_GUEST to common header
xen/arm: Introduce new helpers to handle guest/hyp SErrors
xen/arm: Replace do_trap_guest_serror with new helpers
xen/arm: Unmask the Abort/SError bit in the exception entries
xen/arm: Introduce a helper to synchronize SError
xen/arm: Isolate the SError between the context switch of 2 vCPUs
xen/arm: Prevent slipping hypervisor SError to guest
xen/arm: Handle guest external abort as guest SError
docs/misc/xen-command-line.markdown | 44 ++++++++
xen/arch/arm/arm32/asm-offsets.c | 1 +
xen/arch/arm/arm32/entry.S | 37 ++++++-
xen/arch/arm/arm32/traps.c | 5 +-
xen/arch/arm/arm64/asm-offsets.c | 1 +
xen/arch/arm/arm64/domctl.c | 6 +
xen/arch/arm/arm64/entry.S | 105 ++++++++----------
xen/arch/arm/domain.c | 9 ++
xen/arch/arm/domain_build.c | 7 ++
xen/arch/arm/p2m.c | 16 ++-
xen/arch/arm/traps.c | 200 ++++++++++++++++++++++++++++++----
xen/include/asm-arm/arm32/processor.h | 12 +-
xen/include/asm-arm/arm64/processor.h | 10 +-
xen/include/asm-arm/cpufeature.h | 3 +-
xen/include/asm-arm/domain.h | 4 +
xen/include/asm-arm/processor.h | 19 +++-
16 files changed, 370 insertions(+), 109 deletions(-)
--
2.7.4