Re: [Xen-devel] [PATCH v6] x86/p2m: use large pages for MMIO mappings
On 01/02/16 09:14, Jan Beulich wrote:
> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -899,48 +899,64 @@ void p2m_change_type_range(struct domain
> p2m_unlock(p2m);
> }
>
> -/* Returns: 0 for success, -errno for failure */
> +/*
> + * Returns:
> + * 0 for success
> + * -errno for failure
> + * 1 + new order for caller to retry with smaller order (guaranteed
> + * to be smaller than order passed in)
> + */
> static int set_typed_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn,
> - p2m_type_t gfn_p2mt, p2m_access_t access)
> + unsigned int order, p2m_type_t gfn_p2mt,
> + p2m_access_t access)
> {
> int rc = 0;
> p2m_access_t a;
> p2m_type_t ot;
> mfn_t omfn;
> + unsigned int cur_order = 0;
> struct p2m_domain *p2m = p2m_get_hostp2m(d);
>
> if ( !paging_mode_translate(d) )
> return -EIO;
>
> - gfn_lock(p2m, gfn, 0);
> - omfn = p2m->get_entry(p2m, gfn, &ot, &a, 0, NULL, NULL);
> + gfn_lock(p2m, gfn, order);
> + omfn = p2m->get_entry(p2m, gfn, &ot, &a, 0, &cur_order, NULL);
> + if ( cur_order < order )
> + {
> + gfn_unlock(p2m, gfn, order);
> + return cur_order + 1;
> + }
> if ( p2m_is_grant(ot) || p2m_is_foreign(ot) )
> {
> - gfn_unlock(p2m, gfn, 0);
> + gfn_unlock(p2m, gfn, order);
> domain_crash(d);
> return -ENOENT;
> }
> else if ( p2m_is_ram(ot) )
> {
> - ASSERT(mfn_valid(omfn));
> - set_gpfn_from_mfn(mfn_x(omfn), INVALID_M2P_ENTRY);
> + unsigned long i;
> +
> + for ( i = 0; i < (1UL << order); ++i )
> + {
> + ASSERT(mfn_valid(_mfn(mfn_x(omfn) + i)));
> + set_gpfn_from_mfn(mfn_x(omfn) + i, INVALID_M2P_ENTRY);
On further consideration, shouldn't we have a preemption check here?
Removing a 1GB superpage's worth of RAM mappings is going to execute for
an unreasonably long time.
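Something along the lines of the following is what I have in mind (purely
illustrative; the check interval, the -ERESTART return and how the caller
would pick up a continuation are all placeholders, not something from the
patch):

    /*
     * Illustrative only: periodically check whether we ought to yield and,
     * if so, bail out, leaving it to the caller to arrange a continuation.
     */
    for ( i = 0; i < (1UL << order); ++i )
    {
        if ( i && !(i & 0xfffUL) && hypercall_preempt_check() )
        {
            gfn_unlock(p2m, gfn, order);
            return -ERESTART;
        }

        ASSERT(mfn_valid(_mfn(mfn_x(omfn) + i)));
        set_gpfn_from_mfn(mfn_x(omfn) + i, INVALID_M2P_ENTRY);
    }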
~Andrew
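P.S. Purely for reference, how I'd expect a caller to consume the new
return convention (hypothetical caller, not quoting the patch;
largest_suitable_order() is a made-up helper standing in for whatever
alignment/size calculation the caller actually does):

    for ( i = 0; i < nr; i += 1UL << order )
    {
        /* Start with the largest order alignment and remaining size allow. */
        for ( order = largest_suitable_order(gfn + i, mfn + i, nr - i); ;
              order = rc - 1 )
        {
            rc = set_typed_p2m_entry(d, gfn + i, _mfn(mfn + i), order,
                                     p2m_mmio_direct, access);
            if ( rc <= 0 )
                break;           /* 0 = success, -errno = failure */
            ASSERT(rc <= order); /* positive: retry with the smaller order */
        }
        if ( rc < 0 )
            break;               /* propagate the error */
    }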
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel