
Re: [Xen-devel] [PATCH] xen/xenbus: Don't leak memory when unmapping the ring on HVM backend



On Mon, Aug 10, 2015 at 07:10:38PM +0100, Julien Grall wrote:
> The commit ccc9d90a9a8b5c4ad7e9708ec41f75ff9e98d61d "xenbus_client:
> Extend interface to support multi-page ring" removes the call to
> free_xenballooned_pages in xenbus_unmap_ring_vfree_hvm.
> 
> As a result, the pages are never given back to Linux and are lost forever.
> This only happens when the backends are running in HVM domains.
> 
> Signed-off-by: Julien Grall <julien.grall@xxxxxxxxxx>
> 
> ---
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
> Cc: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
> Cc: David Vrabel <david.vrabel@xxxxxxxxxx>
> Cc: Wei Liu <wei.liu2@xxxxxxxxxx>
> 

Reviewed-by: Wei Liu <wei.liu2@xxxxxxxxxx>

> The regression appeared in Linux 4.1. HVM backends, which are always used on
> ARM, will leak every mapped ring (i.e. ~12KB per domain with 1 disk and 1 vif).
> ---
>  drivers/xen/xenbus/xenbus_client.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/xen/xenbus/xenbus_client.c b/drivers/xen/xenbus/xenbus_client.c
> index 9ad3272..e303535 100644
> --- a/drivers/xen/xenbus/xenbus_client.c
> +++ b/drivers/xen/xenbus/xenbus_client.c
> @@ -814,8 +814,10 @@ static int xenbus_unmap_ring_vfree_hvm(struct xenbus_device *dev, void *vaddr)
>  
>       rv = xenbus_unmap_ring(dev, node->handles, node->nr_handles,
>                              addrs);
> -     if (!rv)
> +     if (!rv) {
>               vunmap(vaddr);
> +             free_xenballooned_pages(node->nr_handles, node->hvm.pages);
> +     }
>       else
>               WARN(1, "Leaking %p, size %u page(s)\n", vaddr,
>                    node->nr_handles);
> -- 
> 2.1.4
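
For reference, with the hunk above applied the tail of
xenbus_unmap_ring_vfree_hvm() reads as follows. Only the lines visible in the
diff are reproduced; the earlier part of the function and its declarations are
elided, and the comments are mine rather than part of the patch:

    rv = xenbus_unmap_ring(dev, node->handles, node->nr_handles,
                           addrs);
    if (!rv) {
            /* Tear down the contiguous virtual mapping of the ring... */
            vunmap(vaddr);
            /* ...and hand the ballooned pages back to Linux; this is the
             * call that commit ccc9d90a9a8b dropped, causing the leak. */
            free_xenballooned_pages(node->nr_handles, node->hvm.pages);
    }
    else
            WARN(1, "Leaking %p, size %u page(s)\n", vaddr,
                 node->nr_handles);

The pairing is the point: the HVM map path obtains the backing pages from the
balloon (alloc_xenballooned_pages()) before building the virtual mapping with
vmap(), so the unmap path must call free_xenballooned_pages() in addition to
vunmap(); vunmap() alone only removes the virtual mapping and never returns
the pages to the balloon.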

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel