Re: [PATCH v2 15/18] x86/mm: introduce a per-vCPU mapcache when using ASI
On Fri Jan 10, 2025 at 3:02 PM GMT, Roger Pau Monné wrote:
> On Thu, Jan 09, 2025 at 03:08:15PM +0000, Alejandro Vallejo wrote:
> > On Wed Jan 8, 2025 at 2:26 PM GMT, Roger Pau Monne wrote:
> > > When using a unique per-vCPU root page table the per-domain region
> > > becomes per-vCPU, and hence the mapcache is no longer shared between
> > > all vCPUs of a domain. Introduce per-vCPU mapcache structures, and
> > > modify map_domain_page() to create per-vCPU mappings when possible.
> > > Note the lock is also not needed when using per-vCPU map caches, as
> > > the structure is no longer shared.
> > >
> > > This introduces some duplication in the domain and vcpu structures,
> > > as both contain a mapcache field to support running with and without
> > > per-vCPU page tables.
> > > Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> > > ---
> > > xen/arch/x86/domain_page.c | 90 ++++++++++++++++++++-----------
> > > xen/arch/x86/include/asm/domain.h | 20 ++++---
> > > 2 files changed, 71 insertions(+), 39 deletions(-)
> > >
> > > diff --git a/xen/arch/x86/domain_page.c b/xen/arch/x86/domain_page.c
> > > index 1372be20224e..65900d6218f8 100644
> > > --- a/xen/arch/x86/domain_page.c
> > > +++ b/xen/arch/x86/domain_page.c
> > > @@ -74,7 +74,9 @@ void *map_domain_page(mfn_t mfn)
> > > struct vcpu *v;
> > > struct mapcache_domain *dcache;
> > > struct mapcache_vcpu *vcache;
> > > + struct mapcache *cache;
> > > struct vcpu_maphash_entry *hashent;
> > > + struct domain *d;
> > >
> > > #ifdef NDEBUG
> > > if ( mfn_x(mfn) <= PFN_DOWN(__pa(HYPERVISOR_VIRT_END - 1)) )
> > > @@ -85,9 +87,12 @@ void *map_domain_page(mfn_t mfn)
> > > if ( !v || !is_pv_vcpu(v) )
> > > return mfn_to_virt(mfn_x(mfn));
> > >
> > > - dcache = &v->domain->arch.pv.mapcache;
> > > + d = v->domain;
> > > + dcache = &d->arch.pv.mapcache;
> > > vcache = &v->arch.pv.mapcache;
> > > - if ( !dcache->inuse )
> > > + cache = d->arch.vcpu_pt ? &v->arch.pv.mapcache.cache
> > > + : &d->arch.pv.mapcache.cache;
> > > + if ( !cache->inuse )
> > > return mfn_to_virt(mfn_x(mfn));
> > >
> > > perfc_incr(map_domain_page_count);
> > > @@ -98,17 +103,18 @@ void *map_domain_page(mfn_t mfn)
> > > if ( hashent->mfn == mfn_x(mfn) )
> > > {
> > > idx = hashent->idx;
> > > - ASSERT(idx < dcache->entries);
> > > + ASSERT(idx < cache->entries);
> > > hashent->refcnt++;
> > > ASSERT(hashent->refcnt);
> > > ASSERT(mfn_eq(l1e_get_mfn(MAPCACHE_L1ENT(idx)), mfn));
> > > goto out;
> > > }
> > >
> > > - spin_lock(&dcache->lock);
> > > + if ( !d->arch.vcpu_pt )
> > > + spin_lock(&dcache->lock);
> >
> > Hmmm. I wonder whether we might not want a nospec here...
>
> Not sure TBH, we have other instances of conditional locking that
> don't use nospec(). That said I'm not claiming those are correct.
> Shouldn't people that care about this kind of speculation into
> critical regions just use CONFIG_SPECULATIVE_HARDEN_LOCK?
Do people that care have a choice though? CONFIG_SPECULATIVE_HARDEN_LOCK only
blocks speculation in the taken branch here, so the critical region isn't
hardened when the relaxed branch is followed.
I suspect nospec in the condition would be fine perf-wise because the CPU can
still do straight-line-speculation on the underlying function call when
CONFIG_SPECULATIVE_HARDEN_LOCK is not defined.
It's not the end of the world either way.
>
> Thanks, Roger.
Cheers,
Alejandro