Re: [Xen-devel] [PATCH v2] tools/tests/mem-sharing/memshrtool share-all test
On Fri, 2013-03-22 at 19:25 +0000, Andres Lagar-Cavilla wrote:
> On Mar 21, 2013, at 8:17 AM, Tim Deegan <tim@xxxxxxx> wrote:
>
> > At 09:34 -0400 on 18 Mar (1363599276), Tamas Lengyel wrote:
> >> Update the memshrtool test program to allow sharing of all pages of two
> >> domains with identical memory sizes. Currently the tool only allows
> >> sharing of specific pages. With this patch we can quickly share all pages
> >> between clones and check how many pages were successfully deduplicated.
> >> The pages' contents are not checked, so this mode is only safe for clone
> >> domains.
> >
> > Cc'ing Andres, who wrote the original tool.
> >
> > Tim.
> >
> >> v2: fix typo of source_info
> >>
> >> Signed-off-by: Tamas Lengyel <tamas.lengyel@xxxxxxxxxxxx>
> Just a few minor comments.
>
> The code itself is correct as a first attempt. I am tempted to ack it
> on the basis that it is a useful thing.
Did you conclude that you would ack it in the end or not?
WRT the freeze, it seems this is new standalone functionality in a test
tool, which ought to be pretty safe. George CC'd.
Ian.
>
> However, I have some higher-level concerns, and I see one important
> problem outlined below.
>
> In terms of the higher level:
> - Are these really clone VMs? In order to nominate gfns, they must be
> allocated, so what was allocated in the target VM before this? How would
> you share two 16GB domains if you have 2GB free before allocating the
> target domain (setting aside how you deal with CoW overflow, which is a
> separate issue)? You may consider revisiting the add-to-physmap sharing
> memop.
> - Can you document when one should call this? Or at least your envisioned
> scenario. This ties in with the question before.
> - I think it's high time we batched sharing calls. I have not been able
> to do this, but it should be relatively simple to submit a hypervisor
> patch to achieve this. For what you are trying to do, it would give you a
> very nice boost in performance; a sketch follows.
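>
> To illustrate, a rough sketch of the kind of interface I have in mind
> (untested, and xc_memshr_share_range() is hypothetical -- it would need
> the hypervisor patch mentioned above):
>
>     /* Hypothetical batched call: nominate and share a whole range of
>      * gfns in one hypercall, instead of three hypercalls per page. */
>     int xc_memshr_share_range(xc_interface *xch,
>                               domid_t source_domain,
>                               domid_t client_domain,
>                               xen_pfn_t first_gfn,
>                               xen_pfn_t nr_gfns,
>                               uint64_t *nr_shared);
>
>     /* The tool's per-page loop would then collapse to: */
>     ret = xc_memshr_share_range(xch, source_domid, domid,
>                                 0, pages, &total_shared);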
>
> >> ---
> >> tools/tests/mem-sharing/memshrtool.c |   58 ++++++++++++++++++++++++++++++++++
> >> 1 files changed, 58 insertions(+), 0 deletions(-)
> >>
> >> diff --git a/tools/tests/mem-sharing/memshrtool.c b/tools/tests/mem-sharing/memshrtool.c
> >> index db44294..b3fd415 100644
> >> --- a/tools/tests/mem-sharing/memshrtool.c
> >> +++ b/tools/tests/mem-sharing/memshrtool.c
> >> @@ -10,9 +10,12 @@
> >> #include <errno.h>
> >> #include <string.h>
> >> #include <sys/mman.h>
> >> +#include <inttypes.h>
> >>
> >> #include "xenctrl.h"
> >>
> >> +#define PAGE_SIZE_KB (XC_PAGE_SIZE/1024)
> A matter of style, but in my view this is unneeded, see below.
> >> +
> >> static int usage(const char* prog)
> >> {
> >> printf("usage: %s <command> [args...]\n", prog);
> >> @@ -24,6 +27,8 @@ static int usage(const char* prog)
> >> printf(" share <domid> <gfn> <handle> <source> <source-gfn>
> >> <source-handle>\n");
> >> printf(" - Share two pages.\n");
> >> printf(" unshare <domid> <gfn> - Unshare a page by grabbing a
> >> writable map.\n");
> >> + printf(" share-all <domid> <source>\n");
> >> + printf(" - Share all pages.\n");
> >> printf(" add-to-physmap <domid> <gfn> <source> <source-gfn>
> >> <source-handle>\n");
> >> printf(" - Populate a page in a domain with a
> >> shared page.\n");
> >> printf(" debug-gfn <domid> <gfn> - Debug a particular domain and
> >> gfn.\n");
> >> @@ -131,6 +136,59 @@ int main(int argc, const char** argv)
> >> munmap(map, 4096);
> >> R((int)!map);
> >> }
> >> +    else if( !strcasecmp(cmd, "share-all") )
> >> +    {
> >> +        domid_t domid;
> >> +        uint64_t handle;
> >> +        domid_t source_domid;
> >> +        uint64_t source_handle;
> >> +        xc_dominfo_t info, source_info;
> >> +        uint64_t pages, source_pages;
> >> +        uint64_t total_shared = 0;
> >> +        int ret;
> >> +        uint64_t share_page;
> >> +
> >> +        if( argc != 4 )
> >> +            return usage(argv[0]);
> >> +
> >> +        domid = strtol(argv[2], NULL, 0);
> >> +        source_domid = strtol(argv[3], NULL, 0);
> >> +
> >> +        ret = xc_domain_getinfo(xch, domid, 1, &info);
> >> +        if( ret != 1 || info.domid != domid )
> >> +            return usage(argv[0]);
> >> +        pages = info.max_memkb / PAGE_SIZE_KB;
> info.max_memkb >> 2, cleaner code, more in line with the code base.
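>
> That is, max_memkb is in KiB and a page is 4 KiB (assuming the usual
> 4 KiB XC_PAGE_SIZE), so the division is just a shift by two:
>
>     pages = info.max_memkb >> 2;    /* KiB -> 4 KiB pages */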
> >> +
> >> +        ret = xc_domain_getinfo(xch, source_domid, 1, &source_info);
> >> +        if( ret != 1 || source_info.domid != source_domid )
> >> +            return usage(argv[0]);
> >> +        source_pages = source_info.max_memkb / PAGE_SIZE_KB;
> In most scenarios you would need to pause both domains here, particularly
> as VMs may self-modify their physmap (ballooning, mmio, etc.).
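>
> Something like this around the sharing loop (untested sketch, using the
> existing xc_domain_pause()/xc_domain_unpause() calls):
>
>     /* Quiesce both domains so their physmaps cannot change under us. */
>     if ( xc_domain_pause(xch, domid) ||
>          xc_domain_pause(xch, source_domid) )
>         return 1;
>
>     /* ... nominate/share loop ... */
>
>     xc_domain_unpause(xch, source_domid);
>     xc_domain_unpause(xch, domid);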
> >> +
> >> +        if( pages != source_pages ) {
> >> +            printf("Page count in source and destination domain doesn't match "
> >> +                   "(source: %"PRIu64", destination %"PRIu64")\n", source_pages, pages);
> to stderr.
> >> +            return 1;
> >> +        }
> >> +
> >> +        for( share_page = 0; share_page < pages; ++share_page ) {
> The memory layout of an HVM guest is sparse. While iterating over tot
> pages will get you a lot of sharing, it will not get you all of it. For
> example, for a VM with a nominal 4GB of RAM, the max gfn is around 4.25GB.
> Even for small VMs, you have gfns in the 3.75-4GB range. You should check
> equality of max gfn, which might be a very difficult thing to achieve
> depending on the stage of a VM's lifetime at which you call this. And you
> should have a policy for dealing with physmap holes (for example, is there
> any point in sharing the VGA mmio? yes/no, your call, argue for it,
> document it, etc).
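>
> A rough sketch of what I mean (untested, and assuming
> xc_domain_maximum_gpfn() returns the highest gfn on success):
>
>     xen_pfn_t max_gfn = xc_domain_maximum_gpfn(xch, domid);
>     xen_pfn_t source_max_gfn = xc_domain_maximum_gpfn(xch, source_domid);
>
>     /* Bail out if the two physmap layouts differ. */
>     if ( max_gfn != source_max_gfn )
>         return 1;
>
>     /* Walk every gfn up to the max; a failed nominate just means a
>      * physmap hole, so skipping on failure handles holes already. */
>     for ( share_page = 0; share_page <= max_gfn; ++share_page )
>     {
>         /* nominate both gfns and share, as in the patch */
>     }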
>
> Andres
> >> +
> >> +            ret = xc_memshr_nominate_gfn(xch, domid, share_page, &handle);
> >> +            if( ret < 0 ) {
> >> +                continue;
> >> +            }
> >> +
> >> +            ret = xc_memshr_nominate_gfn(xch, source_domid, share_page, &source_handle);
> >> +            if( ret < 0 ) {
> >> +                continue;
> >> +            }
> >> +
> >> +            ret = xc_memshr_share_gfns(xch, source_domid, share_page, source_handle, domid, share_page, handle);
> >> +            if( ret >= 0 ) total_shared++;
> >> +        }
> >> +
> >> +        printf("Shared pages: %"PRIu64" out of %"PRIu64"\n", total_shared, pages);
> >> +
> >> +    }
> >> else if( !strcasecmp(cmd, "add-to-physmap") )
> >> {
> >> domid_t domid;
> >> --
> >> 1.7.2.5
> >>
> >>
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel