Re: [PATCH v3 3/4] x86: limit issuing of IBPB during context switch
On 27.01.2023 18:47, Andrew Cooper wrote:
> On 27/01/2023 7:51 am, Jan Beulich wrote:
>> On 26.01.2023 21:49, Andrew Cooper wrote:
>>> On 25/01/2023 3:26 pm, Jan Beulich wrote:
>>>> --- a/xen/arch/x86/domain.c
>>>> +++ b/xen/arch/x86/domain.c
>>>> @@ -2015,7 +2015,8 @@ void context_switch(struct vcpu *prev, s
>>>>  
>>>>      ctxt_switch_levelling(next);
>>>>  
>>>> -    if ( opt_ibpb_ctxt_switch && !is_idle_domain(nextd) )
>>>> +    if ( opt_ibpb_ctxt_switch && !is_idle_domain(nextd) &&
>>>> +         !(prevd->arch.spec_ctrl_flags & SCF_entry_ibpb) )
>>>>      {
>>>>          static DEFINE_PER_CPU(unsigned int, last);
>>>>          unsigned int *last_id = &this_cpu(last);
>>>>
>>>
>>> The aforementioned naming change makes the (marginal) security hole
>>> here more obvious.
>>>
>>> When we use entry-IBPB to protect Xen, we only care about the branch
>>> types in the BTB.  We don't flush the RSB when using the SMEP
>>> optimisation.
>>>
>>> Therefore, entry-IBPB is not something which lets us safely skip
>>> exit-new-pred-context.
>>
>> Yet what's to be my takeaway?  You may be suggesting to drop the
>> patch, or you may be suggesting to tighten the condition.  (My guess
>> would be the former.)
>
> Well - the patch can't be committed in this form.
>
> I haven't figured out if there is a nice way to solve this, so I'm
> afraid I don't have a helpful answer to your question.  I think it is
> reasonable to wait a bit and see if a solution comes to mind.
>
> I'm not averse in principle to having this optimisation (well - a
> working version of it), but honestly, it's marginal right now and it
> has to be weighed against whatever extra complexity is required to not
> open the security hole.

Well, the condition can be extended accordingly (see the patch fragment
at the end), but the question is whether the more complex logic is worth
the savings (especially in case it needs further extending, i.e. if I
have still overlooked something).

> And FYI, this (general issue) was precisely why I ended up not trying
> to do this rearranging in XSA-407/422.  I almost missed this security
> hole and acked the patch.

I appreciate you having forced us to be cautious at that time.

Jan

--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -2067,17 +2067,26 @@ void context_switch(struct vcpu *prev, s
     }
     else
     {
+        unsigned int feat_sc_rsb = X86_FEATURE_SC_RSB_HVM;
+
         __context_switch();
 
         /* Re-enable interrupts before restoring state which may fault. */
         local_irq_enable();
 
         if ( is_pv_domain(nextd) )
+        {
             load_segments(next);
 
+            feat_sc_rsb = X86_FEATURE_SC_RSB_PV;
+        }
+
         ctxt_switch_levelling(next);
 
-        if ( opt_ibpb_ctxt_switch && !is_idle_domain(nextd) )
+        if ( opt_ibpb_ctxt_switch && !is_idle_domain(nextd) &&
+             (!(prevd->arch.spec_ctrl_flags & SCF_entry_ibpb) ||
+              /* is_idle_domain(prevd) || */
+              !boot_cpu_has(feat_sc_rsb)) )
         {
             static DEFINE_PER_CPU(unsigned int, last);
             unsigned int *last_id = &this_cpu(last);
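
To make the intended skip condition easier to follow outside of diff
context, here is a minimal standalone sketch.  It is illustration only,
not part of the patch: the helper name must_issue_ibpb is made up, and
the Xen-internal declarations (struct domain, is_pv_domain(),
is_idle_domain(), boot_cpu_has(), opt_ibpb_ctxt_switch, SCF_entry_ibpb,
X86_FEATURE_SC_RSB_*) are assumed to be in scope as they are in
domain.c.

    /*
     * Sketch of the logic the fragment above encodes: the exit-time
     * IBPB may only be skipped if the previous domain's entry path
     * already issued an IBPB (covering the BTB) *and* the RSB gets
     * overwritten for the guest type in question - per the observation
     * above, entry-IBPB alone does not flush the RSB when the SMEP
     * optimisation is in use.
     */
    static bool must_issue_ibpb(const struct domain *prevd,
                                const struct domain *nextd)
    {
        /* As in the fragment: RSB-overwrite feature per guest type. */
        unsigned int feat_sc_rsb = is_pv_domain(nextd)
                                   ? X86_FEATURE_SC_RSB_PV
                                   : X86_FEATURE_SC_RSB_HVM;

        if ( !opt_ibpb_ctxt_switch || is_idle_domain(nextd) )
            return false; /* Flushing on context switch not requested. */

        if ( !(prevd->arch.spec_ctrl_flags & SCF_entry_ibpb) )
            return true;  /* No IBPB on entry - BTB not yet flushed. */

        /* Entry-IBPB covered the BTB; the RSB needs separate cover. */
        return !boot_cpu_has(feat_sc_rsb);
    }

Whether this set of conditions is already complete is, as said, exactly
the open question.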