[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [Xen-devel] [PATCH 3 of 3] x86/mem_sharing: For shared pages with many references, use a hash table instead of a list

  • To: "Tim Deegan" <tim@xxxxxxx>
  • From: "Andres Lagar-Cavilla" <andres@xxxxxxxxxxxxxxxx>
  • Date: Wed, 18 Apr 2012 09:18:44 -0700
  • Cc: adin@xxxxxxxxxxxxxx, andres@xxxxxxxxxxxxxx, keir.xen@xxxxxxxxx, xen-devel@xxxxxxxxxxxxx
  • Delivery-date: Wed, 18 Apr 2012 16:19:03 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xen.org>

> At 10:16 -0400 on 12 Apr (1334225774), Andres Lagar-Cavilla wrote:
>>  xen/arch/x86/mm/mem_sharing.c     |  170 +++++++++++++++++++++++++++++++++++--
>>  xen/include/asm-x86/mem_sharing.h |   13 ++-
>>  2 files changed, 169 insertions(+), 14 deletions(-)
>> For shared frames that have many references, the doubly-linked list used to
>> store the rmap results in costly scans during unshare operations. To
>> alleviate the overhead, replace the linked list with a hash table. However,
>> hash tables are space-intensive, so only use them for pages that have
>> "many" (arbitrary threshold) references.
>> Unsharing is heavily exercised during domain destroy. In experimental
>> testing, for a domain that points over 100 thousand pages to the same
>> shared frame, domain destruction dropped from over 7 minutes(!) to less
>> than two seconds.
> If you're adding a new datastructure, would it be better to use a tree,
> since the keys are easily sorted?  Xen already has include/xen/rbtree.h.
> It would have a higher per-entry overhead but you wouldn't need to keep
> the list datastructure as well for the light-sharing case.

My main concern is space. A regular case we deal with is four hundred
thousand shared frames, most backing roughly a hundred <domid,gfn>s,
some backing over a hundred thousand, a few backing less than ten
thousand. That's a lot of heap memory for rb trees and nodes! I find O(n)
on fewer than 256 entries easily tolerable, as well as spending an extra
page only when we're saving thousands.
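To make the threshold idea concrete, here is a minimal userspace sketch of
an rmap that starts as a singly-linked list and converts itself to a hash
table once it crosses an arbitrary size threshold. All names and constants
(RMAP_LIST_MAX, RMAP_HASH_SIZE, struct layouts) are illustrative, not
Xen's actual implementation:

```c
/* Sketch: rmap that is a plain list for light sharing and converts to a
 * hash table past a threshold. Constants and names are illustrative. */
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define RMAP_LIST_MAX   256   /* arbitrary threshold, per the patch text */
#define RMAP_HASH_SIZE  512   /* roughly one page of bucket heads */

struct gfn_info {
    unsigned long gfn;
    unsigned short domid;
    struct gfn_info *next;
};

struct rmap {
    int is_hash;              /* 0: single list; 1: hash table */
    unsigned int count;
    union {
        struct gfn_info *list;        /* light-sharing case */
        struct gfn_info **buckets;    /* heavy-sharing case */
    } u;
};

static unsigned int rmap_bucket(unsigned long gfn)
{
    return gfn % RMAP_HASH_SIZE;
}

/* Rehash every list entry into buckets once we cross the threshold. */
static void rmap_maybe_convert(struct rmap *r)
{
    struct gfn_info *g, *next;
    struct gfn_info **buckets;

    if (r->is_hash || r->count <= RMAP_LIST_MAX)
        return;
    buckets = calloc(RMAP_HASH_SIZE, sizeof(*buckets));
    for (g = r->u.list; g; g = next) {
        unsigned int b = rmap_bucket(g->gfn);
        next = g->next;
        g->next = buckets[b];
        buckets[b] = g;
    }
    r->u.buckets = buckets;
    r->is_hash = 1;
}

static void rmap_add(struct rmap *r, unsigned short domid, unsigned long gfn)
{
    struct gfn_info *g = malloc(sizeof(*g));
    g->domid = domid;
    g->gfn = gfn;
    if (r->is_hash) {
        unsigned int b = rmap_bucket(gfn);
        g->next = r->u.buckets[b];
        r->u.buckets[b] = g;
    } else {
        g->next = r->u.list;
        r->u.list = g;
    }
    r->count++;
    rmap_maybe_convert(r);
}

/* Lookup scans one bucket (or the whole list in the light case). */
static struct gfn_info *rmap_find(struct rmap *r, unsigned short domid,
                                  unsigned long gfn)
{
    struct gfn_info *g = r->is_hash ? r->u.buckets[rmap_bucket(gfn)]
                                    : r->u.list;
    for ( ; g; g = g->next)
        if (g->domid == domid && g->gfn == gfn)
            return g;
    return NULL;
}
```

This is where the space argument above comes from: below the threshold the
only cost is one pointer per entry, and the extra page for bucket heads is
paid only when scans would otherwise walk thousands of entries.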

Nevertheless, I'll look into the rb tree, whose only consumer is tmem, if
I'm not mistaken. It doesn't seem like pool objects grow to contain that
many pages, but I could be wrong.

> Tim.

Xen-devel mailing list
