[PATCH v4 3/4] x86: limit issuing of IBPB during context switch
When the outgoing vCPU had IBPB issued and the RSB overwritten upon
entering Xen, there's no need for a second barrier during context
switch.
Note that SCF_entry_ibpb is always clear for the idle domain, so no
explicit idle domain check is needed to augment the feature check
(which is simply inapplicable to "idle").
Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
---
v4: Tighten the condition.
v3: Fold into series.
---
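For readers following along, the gating logic this patch arrives at can
be sketched out of context roughly as below. This is a simplified
standalone rendering, not the patch itself: the struct, helper names,
and booleans are invented stand-ins for prevd/nextd, SCF_entry_ibpb,
and the boot_cpu_has(X86_FEATURE_SC_RSB_{PV,HVM}) checks.

/*
 * Illustrative sketch only: when is a context-switch IBPB still
 * needed?  All names below are hypothetical stand-ins for the Xen
 * internals touched by this patch.
 */
#include <stdbool.h>

struct dom {
    bool idle;        /* is_idle_domain() */
    bool entry_ibpb;  /* SCF_entry_ibpb: IBPB issued on entry to Xen */
    bool pv;          /* PV (as opposed to HVM) domain */
};

/* Stand-in for boot_cpu_has(X86_FEATURE_SC_RSB_{PV,HVM}). */
static bool sc_rsb_active(const struct dom *next, bool rsb_pv,
                          bool rsb_hvm)
{
    return next->pv ? rsb_pv : rsb_hvm;
}

static bool need_ctxt_switch_ibpb(const struct dom *prev,
                                  const struct dom *next,
                                  bool opt_ibpb_ctxt_switch,
                                  bool rsb_pv, bool rsb_hvm)
{
    /*
     * The barrier can be skipped only when entry to Xen already
     * issued IBPB (and the RSB was overwritten) on behalf of the
     * outgoing vCPU.  SCF_entry_ibpb is always clear for "idle",
     * so no separate is_idle_domain(prev) check is needed.
     */
    return opt_ibpb_ctxt_switch && !next->idle &&
           (!prev->entry_ibpb ||
            !sc_rsb_active(next, rsb_pv, rsb_hvm));
}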
I think in principle we could limit the impact from finding the idle
domain as "prevd", by having __context_switch() tell us what kind of
domain's vCPU was switched out (it could still be "idle", but in fewer
cases).
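A hypothetical shape of that idea, purely for illustration (neither the
enum nor the changed return type is part of this series):

/*
 * Hypothetical only: let __context_switch() classify what it
 * switched out, so callers need to treat fewer cases as "idle".
 */
enum switched_out { SWITCHED_OUT_IDLE, SWITCHED_OUT_PV, SWITCHED_OUT_HVM };

static enum switched_out __context_switch(void)
{
    /* ... existing switching logic ... */
    return SWITCHED_OUT_IDLE; /* or _PV / _HVM, as determined above */
}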
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -2005,17 +2005,26 @@ void context_switch(struct vcpu *prev, s
     }
     else
     {
+        unsigned int feat_sc_rsb = X86_FEATURE_SC_RSB_HVM;
+
         __context_switch();
 
         /* Re-enable interrupts before restoring state which may fault. */
         local_irq_enable();
 
         if ( is_pv_domain(nextd) )
+        {
             load_segments(next);
 
+            feat_sc_rsb = X86_FEATURE_SC_RSB_PV;
+        }
+
         ctxt_switch_levelling(next);
 
-        if ( opt_ibpb_ctxt_switch && !is_idle_domain(nextd) )
+        if ( opt_ibpb_ctxt_switch && !is_idle_domain(nextd) &&
+             (!(prevd->arch.spec_ctrl_flags & SCF_entry_ibpb) ||
+              /* is_idle_domain(prevd) || */
+              !boot_cpu_has(feat_sc_rsb)) )
         {
             static DEFINE_PER_CPU(unsigned int, last);
             unsigned int *last_id = &this_cpu(last);