Re: [Xen-devel] [PATCH] xenpaging:add a new array to speed up page-in in xenpaging
Sorry, this mail was sent out in a hurry. In fact we sent out 2 patches, and this is only one of them. The second patch is the key one: it adds a fast way to trigger page-in. Its title is [PATCH 2 of 2] xenpaging: change page-in process to speed up page-in in xenpaging.
http://lists.xen.org/archives/html/xen-devel/2012-01/msg00210.html

In that patch, we take the return value from p2m_mem_paging_populate(), because we want to know whether the p2mt has been changed by the balloon driver or something else. Since the patch was posted we have received a lot of advice, so an improved patch will be sent out later.

> -----Original Message-----
> From: Olaf Hering [mailto:olaf@xxxxxxxxx]
> Sent: Wednesday, January 11, 2012 1:40 AM
> To: Hongkaixing
> Cc: 'Ian Jackson'; xen-devel@xxxxxxxxxxxxxxxxxxx; 'bicky.'
> Subject: Re: [Xen-devel] [PATCH] xenpaging:add a new array to speed up
> page-in in xenpaging
>
> On Fri, Jan 06, Hongkaixing wrote:
>
> > > Why wrap this up in a struct ?
> >
> > We want to keep the same style with
> >
> > typedef struct xenpaging_victim {
> >     /* the gfn of the page to evict */
> >     unsigned long gfn;
> > } xenpaging_victim_t;
>
> This dates back to the initial implementation of xenpaging.
>
> In my testing I started a guest paused, paged it all out, and paged it all
> back into memory. By monitoring nr_pages I noticed that page-in got
> slightly slower over time, so your suggestion to use two indexes will
> speed things up a bit.
>
> Looking through xenpaging.c, I think it's best to have two flat arrays:
>
>     unsigned long slot_to_gfn[paging->max_pages];  /* was victims */
>     int gfn_to_slot[paging->max_pages];
>
> I will prepare a patch to implement this.
>
> Olaf
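For reference, a minimal sketch of the two-array bookkeeping Olaf describes above. Only the names slot_to_gfn and gfn_to_slot come from his mail; the struct and helper names here are hypothetical, and the actual patch may be structured differently:

#include <stdlib.h>

#define INVALID_SLOT (-1)

/* Hypothetical container for the two flat arrays. */
struct paging_sketch {
    unsigned long max_pages;
    unsigned long *slot_to_gfn;  /* pagefile slot -> evicted gfn (was victims) */
    int *gfn_to_slot;            /* gfn -> pagefile slot, or INVALID_SLOT */
};

static int sketch_init(struct paging_sketch *p, unsigned long max_pages)
{
    unsigned long i;

    p->max_pages = max_pages;
    p->slot_to_gfn = calloc(max_pages, sizeof(*p->slot_to_gfn));
    p->gfn_to_slot = malloc(max_pages * sizeof(*p->gfn_to_slot));
    if (!p->slot_to_gfn || !p->gfn_to_slot)
        return -1;
    for (i = 0; i < max_pages; i++)
        p->gfn_to_slot[i] = INVALID_SLOT;
    return 0;
}

/* Record that gfn has been evicted into the given pagefile slot. */
static void record_evict(struct paging_sketch *p, int slot, unsigned long gfn)
{
    p->slot_to_gfn[slot] = gfn;
    p->gfn_to_slot[gfn] = slot;
}

/* On page-in, find the slot for gfn in O(1) instead of scanning the
 * whole victims array, then free both entries. */
static int release_slot(struct paging_sketch *p, unsigned long gfn)
{
    int slot = p->gfn_to_slot[gfn];

    if (slot == INVALID_SLOT)
        return -1;               /* gfn is not currently paged out */
    p->slot_to_gfn[slot] = 0;
    p->gfn_to_slot[gfn] = INVALID_SLOT;
    return slot;
}

The point of the second array is that the page-in path no longer does a linear search over the victims array for every populated gfn, so the per-page cost that grew with nr_pages becomes a constant-time lookup.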