
Re: [PATCH v2 05/18] x86/mm: switch destroy_perdomain_mapping() parameter from domain to vCPU



On Thu, Jan 09, 2025 at 10:02:19AM +0000, Alejandro Vallejo wrote:
> On Wed Jan 8, 2025 at 2:26 PM GMT, Roger Pau Monne wrote:
> > In preparation for the per-domain area being populated with per-vCPU
> > mappings, change the parameter of destroy_perdomain_mapping() to be a
> > vCPU instead of a domain, and also update the function logic to allow
> > manipulation of per-domain mappings using the linear page table
> > mappings.
> >
> > Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> > ---
> >  xen/arch/x86/include/asm/mm.h |  2 +-
> >  xen/arch/x86/mm.c             | 24 +++++++++++++++++++++++-
> >  xen/arch/x86/pv/domain.c      |  3 +--
> >  xen/arch/x86/x86_64/mm.c      |  2 +-
> >  4 files changed, 26 insertions(+), 5 deletions(-)
> >
> > diff --git a/xen/arch/x86/include/asm/mm.h b/xen/arch/x86/include/asm/mm.h
> > index b50a51327b2b..65cd751087dc 100644
> > --- a/xen/arch/x86/include/asm/mm.h
> > +++ b/xen/arch/x86/include/asm/mm.h
> > @@ -605,7 +605,7 @@ int create_perdomain_mapping(struct domain *d, unsigned long va,
> >                               struct page_info **ppg);
> >  void populate_perdomain_mapping(const struct vcpu *v, unsigned long va,
> >                                  mfn_t *mfn, unsigned long nr);
> > -void destroy_perdomain_mapping(struct domain *d, unsigned long va,
> > +void destroy_perdomain_mapping(const struct vcpu *v, unsigned long va,
> >                                 unsigned int nr);
> >  void free_perdomain_mappings(struct domain *d);
> >  
> > diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
> > index 0abea792486c..713ae8dd6fa3 100644
> > --- a/xen/arch/x86/mm.c
> > +++ b/xen/arch/x86/mm.c
> > @@ -6511,10 +6511,11 @@ void populate_perdomain_mapping(const struct vcpu *v, unsigned long va,
> >      unmap_domain_page(l3tab);
> >  }
> >  
> > -void destroy_perdomain_mapping(struct domain *d, unsigned long va,
> > +void destroy_perdomain_mapping(const struct vcpu *v, unsigned long va,
> >                                 unsigned int nr)
> >  {
> >      const l3_pgentry_t *l3tab, *pl3e;
> > +    const struct domain *d = v->domain;
> >  
> >      ASSERT(va >= PERDOMAIN_VIRT_START &&
> >             va < PERDOMAIN_VIRT_SLOT(PERDOMAIN_SLOTS));
> > @@ -6523,6 +6524,27 @@ void destroy_perdomain_mapping(struct domain *d, unsigned long va,
> >      if ( !d->arch.perdomain_l3_pg )
> >          return;
> >  
> > +    /* Use likely to force the optimization for the fast path. */
> > +    if ( likely(v == current) )
> 
> As in the previous patch, doesn't using curr_vcpu here...
> 
> > +    {
> > +        l1_pgentry_t *pl1e;
> > +
> > +        /* Ensure page-tables are from current (if current != curr_vcpu). */
> > +        sync_local_execstate();
> 
> ... avoid the need for this?

See the previous reply, and note the hazard of curr_vcpu changing under
our feet if an interrupt triggers a sync_local_execstate() call.

Thanks, Roger.
