
Re: [PATCH v2.1 06/18] x86/pv: set/clear guest GDT mappings using {populate,destroy}_perdomain_mapping()



On Wed Jan 8, 2025 at 3:11 PM GMT, Roger Pau Monne wrote:
> The pv_{set,destroy}_gdt() functions rely on the L1 table(s) that contain such
> mappings being stashed in the domain structure, and thus such mappings being
> modified by merely updating the L1 entries.
>
> Switch both pv_{set,destroy}_gdt() to instead use
> {populate,destory}_perdomain_mapping().

nit: s/destory/destroy

How come pv_set_gdt() doesn't need to be reordered here (as opposed to v2)?
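
To make sure I'm reading the change right, the before/after shape as I
understand it (rough sketch only, not the literal patch code; all names
are taken from the hunks quoted below):

    /* Before: GDT slots are individual L1 entries stashed in the domain
     * structure, written one at a time. */
    l1_pgentry_t *pl1e = pv_gdt_ptes(v);

    for ( i = 0; i < nr_frames; i++ )
        l1e_write(&pl1e[i], l1e_from_pfn(frames[i], __PAGE_HYPERVISOR_RW));

    /* After: the validated MFNs are handed to the per-domain mapping
     * helper in one go, starting at the vCPU's GDT linear area. */
    populate_perdomain_mapping(v, GDT_VIRT_START(v), mfns, nr_frames);

    /* Teardown mirrors this: instead of rewriting each L1e to point at
     * the read-only zero page, the whole range is simply unmapped. */
    destroy_perdomain_mapping(v, GDT_VIRT_START(v),
                              ARRAY_SIZE(v->arch.pv.gdt_frames));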

>
> Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> ---
> Changes since v2:
>  - Do not change ordering setup of arch_set_info_guest().
> ---
>  xen/arch/x86/pv/descriptor-tables.c | 28 ++++++++++++----------------
>  1 file changed, 12 insertions(+), 16 deletions(-)
>
> diff --git a/xen/arch/x86/pv/descriptor-tables.c b/xen/arch/x86/pv/descriptor-tables.c
> index 02647a2c5047..5a79f022ce13 100644
> --- a/xen/arch/x86/pv/descriptor-tables.c
> +++ b/xen/arch/x86/pv/descriptor-tables.c
> @@ -49,23 +49,20 @@ bool pv_destroy_ldt(struct vcpu *v)
>  
>  void pv_destroy_gdt(struct vcpu *v)
>  {
> -    l1_pgentry_t *pl1e = pv_gdt_ptes(v);
> -    mfn_t zero_mfn = _mfn(virt_to_mfn(zero_page));
> -    l1_pgentry_t zero_l1e = l1e_from_mfn(zero_mfn, __PAGE_HYPERVISOR_RO);
>      unsigned int i;
>  
>      ASSERT(v == current || !vcpu_cpu_dirty(v));
>  
> -    v->arch.pv.gdt_ents = 0;
> -    for ( i = 0; i < FIRST_RESERVED_GDT_PAGE; i++ )
> -    {
> -        mfn_t mfn = l1e_get_mfn(pl1e[i]);
> +    if ( v->arch.cr3 )
> +        destroy_perdomain_mapping(v, GDT_VIRT_START(v),
> +                                  ARRAY_SIZE(v->arch.pv.gdt_frames));
>  
> -        if ( (l1e_get_flags(pl1e[i]) & _PAGE_PRESENT) &&
> -             !mfn_eq(mfn, zero_mfn) )
> -            put_page_and_type(mfn_to_page(mfn));
> +    for ( i = 0; i < ARRAY_SIZE(v->arch.pv.gdt_frames); i++)
> +    {
> +        if ( !v->arch.pv.gdt_frames[i] )
> +            break;
>  
> -        l1e_write(&pl1e[i], zero_l1e);
> +        put_page_and_type(mfn_to_page(_mfn(v->arch.pv.gdt_frames[i])));
>          v->arch.pv.gdt_frames[i] = 0;
>      }
>  }
> @@ -74,8 +71,8 @@ int pv_set_gdt(struct vcpu *v, const unsigned long frames[],
>                 unsigned int entries)
>  {
>      struct domain *d = v->domain;
> -    l1_pgentry_t *pl1e;
>      unsigned int i, nr_frames = DIV_ROUND_UP(entries, 512);
> +    mfn_t mfns[ARRAY_SIZE(v->arch.pv.gdt_frames)];
>  
>      ASSERT(v == current || !vcpu_cpu_dirty(v));
>  
> @@ -90,6 +87,8 @@ int pv_set_gdt(struct vcpu *v, const unsigned long frames[],
>          if ( !mfn_valid(mfn) ||
>               !get_page_and_type(mfn_to_page(mfn), d, PGT_seg_desc_page) )
>              goto fail;
> +
> +        mfns[i] = mfn;
>      }
>  
>      /* Tear down the old GDT. */
> @@ -97,12 +96,9 @@ int pv_set_gdt(struct vcpu *v, const unsigned long frames[],
>  
>      /* Install the new GDT. */
>      v->arch.pv.gdt_ents = entries;
> -    pl1e = pv_gdt_ptes(v);
>      for ( i = 0; i < nr_frames; i++ )
> -    {
>          v->arch.pv.gdt_frames[i] = frames[i];
> -        l1e_write(&pl1e[i], l1e_from_pfn(frames[i], __PAGE_HYPERVISOR_RW));
> -    }
> +    populate_perdomain_mapping(v, GDT_VIRT_START(v), mfns, nr_frames);
>  
>      return 0;
>  
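
(For reference, the DIV_ROUND_UP(entries, 512) above is simply the
descriptor-to-frame conversion: GDT descriptors are 8 bytes, so one
4 KiB frame holds 512 of them. Illustrative only:

    /* 512 entries -> 1 frame, 513 entries -> 2 frames, 1024 -> 2. */
    unsigned int nr_frames = DIV_ROUND_UP(entries, 512);
)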

Cheers,
Alejandro