
Re: [Xen-devel] [PATCH v7] x86/p2m: use large pages for MMIO mappings



On 02/02/16 15:15, Jan Beulich wrote:
> @@ -2095,6 +2131,30 @@ void *map_domain_gfn(struct p2m_domain *
>      return map_domain_page(*mfn);
>  }
>  
> +static unsigned int mmio_order(const struct domain *d,
> +                               unsigned long start_fn, unsigned long nr)
> +{
> +    if ( !need_iommu(d) || !iommu_use_hap_pt(d) ||
> +         (start_fn & ((1UL << PAGE_ORDER_2M) - 1)) || !(nr >> PAGE_ORDER_2M) )
> +        return PAGE_ORDER_4K;
> +
> +    if ( 0 /*
> +            * Don't use 1Gb pages, to limit the iteration count in
> +            * set_typed_p2m_entry() when it needs to zap M2P entries
> +            * for a RAM range.
> +            */ &&
> +         !(start_fn & ((1UL << PAGE_ORDER_1G) - 1)) && (nr >> PAGE_ORDER_1G) &&
> +         hap_has_1gb )
> +        return PAGE_ORDER_1G;
> +
> +    if ( hap_has_2mb )
> +        return PAGE_ORDER_2M;
> +
> +    return PAGE_ORDER_4K;
> +}

Apologies again for only just spotting this.  PV guests need to be
restricted to 4K mappings.  Currently, this function can return
PAGE_ORDER_2M for PV guests, based on hap_has_2mb.

All the callers are presently gated on !paging_mode_translate(d), which
excludes PV guests, but mmio_order() isn't, in principle, wrong to
call for a PV guest.

With either a suitable PV path or an assertion excluding PV guests,

Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>  (to avoid
another on-list iteration for just this issue)

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
