Re: [Xen-devel] [PATCH] linux_gntshr_munmap: munmap takes a length, not a page count
On 3 Sep 2014, at 16:48, David Scott <dave.scott@xxxxxxxxxx> wrote:
>
> On 3 Sep 2014, at 15:01, Ian Campbell <Ian.Campbell@xxxxxxxxxx> wrote:
>
>> On Mon, 2014-09-01 at 13:16 +0100, David Scott wrote:
>>> This fixes a bug where if a client shares more than 1 page, the
>>> munmap call fails to clean up everything. A process which does
>>> a lot of sharing and unsharing can run out of resources.
>>
>> The doc comment on xc_gntshr_munmap does say count is in pages, so your
>> change is correct but all of the in tree callers seem to pass a number
>> of bytes not a number of pages. i.e. everything in tools/libvchan passes
>> n*PAGE_SIZE.
>
> Good point.
>
>> So I don't think this change is complete without also updating those.
>
> I’ll resubmit with the in-tree callers fixed.
Also it looks like xc_gnttab_munmap (gnttab not gntshr) uses pages in both the
API and implementation, but libvchan is supplying bytes. I’ll fix vchan here
too.
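For illustration, the caller-side change would look roughly like this (a
sketch only, with made-up variable names rather than the actual vchan code):

    /* before: a byte length passed where the API expects a page count */
    xc_gntshr_munmap(xcg, ring, n * PAGE_SIZE);

    /* after: pass the number of pages, matching the doc comment */
    xc_gntshr_munmap(xcg, ring, n);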
Dave
>
> Thanks,
> Dave
>
>>
>> Ian.
>>
>>>
>>> Signed-off-by: David Scott <dave.scott@xxxxxxxxxx>
>>> ---
>>> tools/libxc/xc_linux_osdep.c | 2 +-
>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/tools/libxc/xc_linux_osdep.c b/tools/libxc/xc_linux_osdep.c
>>> index 86bff3e..a19e4b6 100644
>>> --- a/tools/libxc/xc_linux_osdep.c
>>> +++ b/tools/libxc/xc_linux_osdep.c
>>> @@ -847,7 +847,7 @@ static void *linux_gntshr_share_pages(xc_gntshr *xch, xc_osdep_handle h,
>>> static int linux_gntshr_munmap(xc_gntshr *xcg, xc_osdep_handle h,
>>>                                void *start_address, uint32_t count)
>>> {
>>> -    return munmap(start_address, count);
>>> +    return munmap(start_address, count * XC_PAGE_SIZE);
>>> }
>>>
>>> static struct xc_osdep_ops linux_gntshr_ops = {
>>
>>
>
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel