Re: [Xen-devel] Grant access to more than one dom



Hi Daniel,

Thanks for the review. I attach an updated patch. With regard to your
comments:

1.- Renamed share_list field of gntalloc_gref. Now called shares.

2.- As you said, I think the extra logic is worth it, since it avoids a
dynamic memory allocation in the usual case of only one share.

3.- Analyzing the return value of __release_gref, I found a possible
(if unlikely) infinite loop in the original __del_gref implementation:
if gref_id <= 0, the list element will never be released. If we ever
hit that situation, a memory leak is the least of our worries, as there
is either a problem in the logic or memory corruption. I propose that
in this case we just return OK and carry on; a rough sketch follows.
What do you think?
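
This is only a sketch of the kind of loop I have in mind; the helper
name, the gntalloc_share type and the return convention assumed for
__release_gref are invented for illustration, they are not the literal
code from the patch:

#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/slab.h>

/* Hypothetical per-domain share entry, only for this sketch. */
struct gntalloc_share {
	struct list_head next_share;
	int gref_id;		/* grant_ref_t in the driver */
};

/* Assumed convention: returns 0 once the grant has been ended and the
 * entry may be freed, nonzero if the foreign domain still maps it.  */
int __release_gref(struct gntalloc_share *share);

/* A share whose gref_id is <= 0 can never be released, so instead of
 * spinning on it forever we warn, unlink it and carry on; at that
 * point a small leak is the least of our worries.                   */
static void __del_gref_shares(struct list_head *shares)
{
	struct gntalloc_share *share, *tmp;

	list_for_each_entry_safe(share, tmp, shares, next_share) {
		if (share->gref_id <= 0) {
			WARN_ONCE(1, "gntalloc: bogus gref_id %d\n",
				  share->gref_id);
		} else if (__release_gref(share) != 0) {
			/* Still mapped by the remote domain: leave it
			 * for the deferred cleanup path as before.   */
			continue;
		}
		list_del(&share->next_share);
		kfree(share);
	}
}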

4.- Fixed a possible leak in __del_gref.

5.- Fixed whitespace changes.

6.- Checked kzalloc return value in gntalloc_ioctl_share.

7.- Removed superfluous INIT_LIST_HEAD in gntalloc_ioctl_share.

8.- Fixed a possible gref leak in gntalloc_ioctl_unshare.

9.- Re your comment about gntalloc_ioctl_unshare:

> It would be nice if this function also supported freeing the initial grant
> reference, so that it can be used to change the domain ID a page is being
> shared with.

The idea of gntalloc_gref.shares is that there are one or more grefs
associated with any allocation. Supporting this would mean changing
that to zero or more, i.e. gntalloc_gref.shares would become a plain
list head and we would require a memory allocation even for the
original share. The change is simple; shall I do it?
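
To illustrate the difference, here is a rough sketch of the two
layouts; the gntalloc_share type and its fields are invented for the
example, and the second struct name exists only so both versions can
sit side by side:

#include <linux/list.h>
#include <xen/grant_table.h>

/* Hypothetical per-domain share entry, only for this sketch. */
struct gntalloc_share {
	struct list_head next_share;
	grant_ref_t gref_id;
	domid_t domid;
};

/* Current layout: the first share is embedded in the gref itself, so
 * sharing with a single domain needs no extra allocation, but every
 * allocation always carries at least one share (1..N).              */
struct gntalloc_gref {
	struct list_head next_gref;
	struct page *page;
	struct gntalloc_share shares;	/* first share inline, extras chained */
};

/* Layout implied by unsharing the initial grant: shares becomes a bare
 * list head (0..N entries), so even the first share would have to be
 * allocated dynamically when the page is granted.                    */
struct gntalloc_gref_alt {		/* illustrative name only */
	struct list_head next_gref;
	struct page *page;
	struct list_head shares;	/* every share kmalloc()'d */
};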

10.- Fixed typos in gntalloc.h.

11.- Fixed struct ioctl_gntalloc_share_gref member order.

-- 
Best regards,
 Simon                            mailto:furryfuttock@xxxxxxxxx

Attachment: gntalloc.patch
Description: Binary data

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

