
Re: [PATCH v2 04/18] x86/pv: introduce function to populate perdomain area and use it to map Xen GDT



On Fri Jan 10, 2025 at 2:29 PM GMT, Roger Pau Monné wrote:
> On Thu, Jan 09, 2025 at 09:55:44AM +0000, Alejandro Vallejo wrote:
> > On Wed Jan 8, 2025 at 2:26 PM GMT, Roger Pau Monne wrote:
> > > The current code to update the Xen part of the GDT when running a PV guest
> > > relies on caching the direct map address of all the L1 tables used to map the
> > > GDT and LDT, so that entries can be modified.
> > >
> > > Introduce a new function that populates the per-domain region, either using the
> > > recursive linear mappings when the target vCPU is the current one, or by
> > > directly modifying the L1 table of the per-domain region.
> > >
> > > Using such function to populate per-domain addresses drops the need to keep a
> > > reference to per-domain L1 tables previously used to change the per-domain
> > > mappings.
> > >
> > > Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> > > ---
> > >  xen/arch/x86/domain.c                | 11 +++-
> > >  xen/arch/x86/include/asm/desc.h      |  6 +-
> > >  xen/arch/x86/include/asm/mm.h        |  2 +
> > >  xen/arch/x86/include/asm/processor.h |  5 ++
> > >  xen/arch/x86/mm.c                    | 88 ++++++++++++++++++++++++++++
> > >  xen/arch/x86/smpboot.c               |  6 +-
> > >  xen/arch/x86/traps.c                 | 10 ++--
> > >  7 files changed, 113 insertions(+), 15 deletions(-)
> > >
> > > diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
> > > index 1f680bf176ee..0bd0ef7e40f4 100644
> > > --- a/xen/arch/x86/domain.c
> > > +++ b/xen/arch/x86/domain.c
> > > @@ -1953,9 +1953,14 @@ static always_inline bool need_full_gdt(const struct domain *d)
> > >  
> > >  static void update_xen_slot_in_full_gdt(const struct vcpu *v, unsigned int cpu)
> > >  {
> > > -    l1e_write(pv_gdt_ptes(v) + FIRST_RESERVED_GDT_PAGE,
> > > -              !is_pv_32bit_vcpu(v) ? per_cpu(gdt_l1e, cpu)
> > > -                                   : per_cpu(compat_gdt_l1e, cpu));
> > > +    ASSERT(v != current);
> > 
> > For this assert, and others below: IIUC, curr_vcpu == current when we're
> > properly switched. When we're idling, current == idle and
> > curr_vcpu == prev_ctx.
> > 
> > Granted, calling this in the middle of a lazy idle loop would be weird, but
> > would it make sense for PT consistency to use curr_vcpu here...
>
> Hm, this function is called in a very specific context, and the assert
> intends to reflect that.  TBH I could just drop it, as
> populate_perdomain_mapping() will DTRT also when v == current. The
> expectation for the context is also that current == curr_vcpu.
>
> Note however that if v == current we would need a flush after the
> populate_perdomain_mapping() call, since populate_perdomain_mapping()
> doesn't perform any flushing of the modified entries.  The main
> purpose of the ASSERT() is to notice this.
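
Ack. So a hypothetical v == current caller would have to flush by hand
after the call; something like this, I suppose (my sketch, not from the
patch, and assuming flush_tlb_one_local() is the right primitive for a
single VA):

    populate_perdomain_mapping(v, va, &mfn, 1);
    if ( v == current )
        /* Entries were written into the live page-tables: flush the VA. */
        flush_tlb_one_local(va);
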
>
> > > +
> > > +    populate_perdomain_mapping(v,
> > > +                               GDT_VIRT_START(v) +
> > > +                               (FIRST_RESERVED_GDT_PAGE << PAGE_SHIFT),
> > > +                               !is_pv_32bit_vcpu(v) ? &per_cpu(gdt_mfn, cpu)
> > > +                                                    : &per_cpu(compat_gdt_mfn,
> > > +                                                               cpu), 1);
> > >  }
> > >  
> > >  static void load_full_gdt(const struct vcpu *v, unsigned int cpu)
> > > diff --git a/xen/arch/x86/include/asm/desc.h b/xen/arch/x86/include/asm/desc.h
> > > index a1e0807d97ed..33981bfca588 100644
> > > --- a/xen/arch/x86/include/asm/desc.h
> > > +++ b/xen/arch/x86/include/asm/desc.h
> > > @@ -44,6 +44,8 @@
> > >  
> > >  #ifndef __ASSEMBLY__
> > >  
> > > +#include <xen/mm-frame.h>
> > > +
> > >  #define GUEST_KERNEL_RPL(d) (is_pv_32bit_domain(d) ? 1 : 3)
> > >  
> > >  /* Fix up the RPL of a guest segment selector. */
> > > @@ -212,10 +214,10 @@ struct __packed desc_ptr {
> > >  
> > >  extern seg_desc_t boot_gdt[];
> > >  DECLARE_PER_CPU(seg_desc_t *, gdt);
> > > -DECLARE_PER_CPU(l1_pgentry_t, gdt_l1e);
> > > +DECLARE_PER_CPU(mfn_t, gdt_mfn);
> > >  extern seg_desc_t boot_compat_gdt[];
> > >  DECLARE_PER_CPU(seg_desc_t *, compat_gdt);
> > > -DECLARE_PER_CPU(l1_pgentry_t, compat_gdt_l1e);
> > > +DECLARE_PER_CPU(mfn_t, compat_gdt_mfn);
> > >  DECLARE_PER_CPU(bool, full_gdt_loaded);
> > >  
> > >  static inline void lgdt(const struct desc_ptr *gdtr)
> > > diff --git a/xen/arch/x86/include/asm/mm.h b/xen/arch/x86/include/asm/mm.h
> > > index 6c7e66ee21ab..b50a51327b2b 100644
> > > --- a/xen/arch/x86/include/asm/mm.h
> > > +++ b/xen/arch/x86/include/asm/mm.h
> > > @@ -603,6 +603,8 @@ int compat_arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg);
> > >  int create_perdomain_mapping(struct domain *d, unsigned long va,
> > >                               unsigned int nr, l1_pgentry_t **pl1tab,
> > >                               struct page_info **ppg);
> > > +void populate_perdomain_mapping(const struct vcpu *v, unsigned long va,
> > > +                                mfn_t *mfn, unsigned long nr);
> > >  void destroy_perdomain_mapping(struct domain *d, unsigned long va,
> > >                                 unsigned int nr);
> > >  void free_perdomain_mappings(struct domain *d);
> > > diff --git a/xen/arch/x86/include/asm/processor.h b/xen/arch/x86/include/asm/processor.h
> > > index d247ef8dd226..82ee89f736c2 100644
> > > --- a/xen/arch/x86/include/asm/processor.h
> > > +++ b/xen/arch/x86/include/asm/processor.h
> > > @@ -243,6 +243,11 @@ static inline unsigned long cr3_pa(unsigned long cr3)
> > >      return cr3 & X86_CR3_ADDR_MASK;
> > >  }
> > >  
> > > +static inline mfn_t cr3_mfn(unsigned long cr3)
> > > +{
> > > +    return maddr_to_mfn(cr3_pa(cr3));
> > > +}
> > > +
> > >  static inline unsigned int cr3_pcid(unsigned long cr3)
> > >  {
> > >      return IS_ENABLED(CONFIG_PV) ? cr3 & X86_CR3_PCID_MASK : 0;
> > > diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
> > > index 3d5dd22b6c36..0abea792486c 100644
> > > --- a/xen/arch/x86/mm.c
> > > +++ b/xen/arch/x86/mm.c
> > > @@ -6423,6 +6423,94 @@ int create_perdomain_mapping(struct domain *d, unsigned long va,
> > >      return rc;
> > >  }
> > >  
> > > +void populate_perdomain_mapping(const struct vcpu *v, unsigned long va,
> > > +                                mfn_t *mfn, unsigned long nr)
> > > +{
> > > +    l1_pgentry_t *l1tab = NULL, *pl1e;
> > > +    const l3_pgentry_t *l3tab;
> > > +    const l2_pgentry_t *l2tab;
> > > +    struct domain *d = v->domain;
> > > +
> > > +    ASSERT(va >= PERDOMAIN_VIRT_START &&
> > > +           va < PERDOMAIN_VIRT_SLOT(PERDOMAIN_SLOTS));
> > > +    ASSERT(!nr || !l3_table_offset(va ^ (va + nr * PAGE_SIZE - 1)));
> > > +
> > > +    /* Use likely to force the optimization for the fast path. */
> > > +    if ( likely(v == current) )
> > 
> > ... and here? In particular I'd expect using curr_vcpu here means...
>
> I'm afraid not; this is a trap I fell into originally when doing this
> series, as I indeed had v == curr_vcpu here (and no
> sync_local_execstate() call).
>
> However, as a result of an interrupt, a call to sync_local_execstate()
> might happen, at which point the previous check of v == curr_vcpu
> becomes stale.

Wow, that's nasty! More than fair enough then. Guess the XSAVE wrappers (and
more generally all vCPU-local memory accessors) will have to take this into
account before poking into the contents of the perdomain region.
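
Something along these lines, I mean (hypothetical helper, names made up,
just to record the pattern):

    /*
     * Sketch: current is stable, but curr_vcpu (and hence the live
     * page-tables) can change under an interrupt-driven
     * sync_local_execstate(), so sync explicitly before using the
     * linear mappings.
     */
    static void *vcpu_local_ptr(const struct vcpu *v, unsigned long va)
    {
        if ( v == current )
        {
            sync_local_execstate();     /* page-tables now match current */
            return (void *)va;          /* per-domain VA is live */
        }

        /* Slow path: reach the area via the target's per-domain L1. */
        return NULL;                    /* placeholder */
    }
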

>
> > > +    {
> > > +        unsigned int i;
> > > +
> > > +        /* Ensure page-tables are from current (if current != curr_vcpu). */
> > > +        sync_local_execstate();
> > 
> > ... this should not be needed.
>
> As kind of mentioned above, this is required to ensure the page-tables
> are in-sync with the vCPU in current, and cannot change as a result of
> an interrupt triggering a call to sync_local_execstate().
>
> Otherwise the page-tables could change while or after the call to
> populate_perdomain_mapping(), and the mappings could end up being
> created on the wrong page-tables.
>
> Thanks, Roger.
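
FWIW, my mental model of the fast path after this explanation (the hunk
above is cut short, so the loop body below is my guess at the shape, not
the actual patch):

    if ( likely(v == current) )
    {
        unsigned int i;

        /* Ensure page-tables are from current (if current != curr_vcpu). */
        sync_local_execstate();

        /* Linear mappings now reference current's page-tables. */
        for ( i = 0; i < nr; i++, va += PAGE_SIZE )
            l1e_write(&__linear_l1_table[l1_linear_offset(va)],
                      l1e_from_mfn(mfn[i], __PAGE_HYPERVISOR_RW));

        return;
    }
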

Cheers,
Alejandro
