
Re: [Xen-devel] [PATCH for-4.6] xen/mm: populate_physmap: validate correctly the gfn for direct mapped domain



On Tue, 2015-08-11 at 18:41 +0100, Julien Grall wrote:
> A direct mapped domain already has its memory allocated 1:1, so we
> directly use the gfn as the mfn to map the RAM in the guest.
> 
> While we validate that the page associated with the first mfn belongs to
> the domain, the subsequent MFNs are not validated when the extent_order
> is > 0.
> 
> This may result in mapping a memory region (MMIO, RAM) which doesn't
> belong to the domain.
> 
> Note that only DOM0 on ARM uses direct memory mapping, so this doesn't
> affect any guest (at least on the upstream version) or x86.
> 
> Signed-off-by: Julien Grall <julien.grall@xxxxxxxxxx>
> 
> ---
> Cc: Ian Campbell <ian.campbell@xxxxxxxxxx>
> Cc: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
> Cc: Jan Beulich <jbeulich@xxxxxxxx>
> Cc: Keir Fraser <keir@xxxxxxx>
> Cc: Tim Deegan <tim@xxxxxxx>
> 
>     This patch is a candidate for Xen 4.6 and for backport to Xen 4.5
>     (and maybe 4.4).
> 
>     The hypercall is used by the ballooning code, and only DOM0 on ARM
>     is a direct mapped domain.
> 
>     The current code validating the GFN passed to the populate physmap
>     hypercall has been moved into a for loop in order to validate each
>     GFN when the extent order is > 0.
> 
>     Without it, a direct mapped domain may be able to map a memory
>     region which doesn't belong to it.
> 
>     I think the switch to the for loop is pretty safe and contained. The
>     worst that could happen is denying a mapping more often than we
>     used to. Although, IIRC, Linux mostly maps pages one by one
>     (i.e. extent order = 0).
> ---
>  xen/common/memory.c | 30 ++++++++++++++++++------------
>  1 file changed, 18 insertions(+), 12 deletions(-)
> 
> diff --git a/xen/common/memory.c b/xen/common/memory.c
> index 61bb94c..7cc8af4 100644
> --- a/xen/common/memory.c
> +++ b/xen/common/memory.c
> @@ -126,22 +126,28 @@ static void populate_physmap(struct memop_args *a)
>              if ( is_domain_direct_mapped(d) )
>              {
>                  mfn = gpfn;
> -                if ( !mfn_valid(mfn) )
> +
> +                for ( j = 0; j < (1 << a->extent_order); j++, mfn++ )
>                  {
> -                    gdprintk(XENLOG_INFO, "Invalid mfn %#"PRI_xen_pfn"\n",
> +                    if ( !mfn_valid(mfn) )
> +                    {
> +                        gdprintk(XENLOG_INFO, "Invalid mfn %#"PRI_xen_pfn"\n",
>                               mfn);
> -                    goto out;
> +                        goto out;
> +                    }
> +
> +                    page = mfn_to_page(mfn);
> +                    if ( !get_page(page, d) )
> +                    {
> +                        gdprintk(XENLOG_INFO,
> +                                 "mfn %#"PRI_xen_pfn" doesn't belong to the"
> +                                 " domain\n", mfn);
> +                        goto out;


This will leak the references on any earlier pages, which will happen all
the time if you cross a boundary from RAM into e.g. MMIO or whatever.

A variant of get_page which took an order argument and returned either with
0 or 1<<order refs taken would perhaps make this easier to deal with. I
suppose we don't already have one of those, but you could add it.
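
Something along these lines, perhaps (purely a sketch, untested, and
"get_page_order" is a made-up name; I don't believe we have such a
helper today):

    /*
     * Take a reference on each of the 2^order pages starting at @page
     * on behalf of @d.  Returns 1 with all references held on success,
     * or 0 with none held (references taken so far are dropped again on
     * failure, so the caller has nothing to unwind).
     */
    static int get_page_order(struct page_info *page, struct domain *d,
                              unsigned int order)
    {
        unsigned long i;

        for ( i = 0; i < (1UL << order); i++ )
            if ( !get_page(page + i, d) )
            {
                while ( i-- )
                    put_page(page + i);
                return 0;
            }

        return 1;
    }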

Oh, am I wrong and we do get_page then immediately put_page for every page?
I suppose that is one way to check the current owner, but I think it needs
a comment at least (it did before TBH, just iterating over many pages makes
it seem even odder).

Is that really the best way to ask if a page is owned by a given domain
though? Does page_get_owner not suffice?
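
i.e. something like the following instead of the get_page()/put_page()
pair (again just a sketch; if there is a locking or refcounting reason
why reading the owner directly isn't safe here, that would be exactly
the thing to capture in the comment):

    /*
     * Check the ownership directly rather than taking and immediately
     * dropping a reference.  NB: without holding a reference this is
     * only a snapshot of the current owner.
     */
    if ( page_get_owner(page) != d )
    {
        gdprintk(XENLOG_INFO,
                 "mfn %#"PRI_xen_pfn" doesn't belong to the domain\n",
                 mfn);
        goto out;
    }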

> +                    }
> +                    put_page(page);
>                  }
>  
> -                page = mfn_to_page(mfn);
> -                if ( !get_page(page, d) )
> -                {
> -                    gdprintk(XENLOG_INFO,
> -                             "mfn %#"PRI_xen_pfn" doesn't belong to the"
> -                             " domain\n", mfn);
> -                    goto out;
> -                }
> -                put_page(page);
> +                page = mfn_to_page(gpfn);
>              }
>              else
>                  page = alloc_domheap_pages(d, a->extent_order, a->memflags);
