Re: [Xen-devel] [PATCH 0 of 3] RFC: x86 memory sharing performance improvements
Please scratch the RFC tag. I'm done testing this and now I'm asking for
inclusion in the tree.
Thanks,
Andres
-----------------------------------------------------------------------
This is an RFC series. I haven't fully tested it yet, but I want the concept
to be known, as I intend this to be merged before the 4.2 window closes.

The sharing subsystem does not scale elegantly with high degrees of page
sharing. The culprit is a reverse map that each shared frame maintains,
resolving to all domain pages pointing to the shared frame. Because the rmap
is implemented as a linked list with O(n) search, CoW unsharing can result in
prolonged search times.

This becomes most obvious during domain destruction, when all shared p2m
entries need to be unshared. Destroying a domain with a lot of sharing could
result in minutes of hypervisor freeze-up!

Solutions proposed:

- Make the p2m cleanup of shared entries part of the preemptible, synchronous
  domain_kill domctl (as opposed to executing monolithically in the
  finalize-destruction RCU callback); see the first sketch below.
- When a shared frame exceeds an arbitrary ref count, mutate the rmap from a
  linked list to a hash table; see the second sketch below.

Signed-off-by: Andres Lagar-Cavilla <andres@xxxxxxxxxxxxxxxx>

 xen/arch/x86/domain.c             |   16 +++-
 xen/arch/x86/mm/mem_sharing.c     |   45 ++++++++++
 xen/arch/x86/mm/p2m.c             |    4 +
 xen/include/asm-arm/mm.h          |    4 +
 xen/include/asm-x86/domain.h      |    1 +
 xen/include/asm-x86/mem_sharing.h |   10 ++
 xen/include/asm-x86/p2m.h         |    4 +
 xen/arch/x86/mm/mem_sharing.c     |  142 +++++++++++++++++++++++--------
 xen/arch/x86/mm/mem_sharing.c     |  170 +++++++++++++++++++++++++++++++++++--
 xen/include/asm-x86/mem_sharing.h |   13 ++-
 10 files changed, 354 insertions(+), 55 deletions(-)
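First sketch. The first change (moving the shared-entry p2m cleanup into the
preemptible, synchronous domain_kill domctl) amounts to bounding how much
unsharing work is done per invocation and remembering progress so the call can
be restarted. The C below is a minimal, self-contained illustration of that
idea only; the names (toy_domain, gfn_is_shared, unshare_gfn, UNSHARE_BATCH)
are assumptions made for the sketch, not the Xen interfaces touched by the
patches, and -EAGAIN merely stands in for whatever restart convention the real
domctl path uses.

#include <errno.h>
#include <stdbool.h>

/* Illustrative stand-ins for struct domain and the p2m layer. */
struct toy_domain {
    unsigned long next_gfn;   /* progress cursor preserved across restarts */
    unsigned long max_gfn;    /* upper bound of the guest physmap */
};

/* Stubs for the sketch; real code would consult and rewrite the p2m. */
static bool gfn_is_shared(struct toy_domain *d, unsigned long gfn)
{ (void)d; (void)gfn; return false; }
static void unshare_gfn(struct toy_domain *d, unsigned long gfn)
{ (void)d; (void)gfn; }

#define UNSHARE_BATCH 512   /* arbitrary per-invocation work bound */

/*
 * Unshare at most UNSHARE_BATCH entries, then return -EAGAIN so the
 * preemptible domctl can be reissued.  The cursor in the domain structure
 * lets each restart resume where the previous one stopped, so the
 * hypervisor never blocks for a full physmap scan.
 */
static int relinquish_shared_pages(struct toy_domain *d)
{
    unsigned int done = 0;

    for ( ; d->next_gfn < d->max_gfn; d->next_gfn++ )
    {
        if ( !gfn_is_shared(d, d->next_gfn) )
            continue;

        unshare_gfn(d, d->next_gfn);

        if ( ++done >= UNSHARE_BATCH )
        {
            d->next_gfn++;      /* resume after this entry next time */
            return -EAGAIN;
        }
    }

    return 0;
}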
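Second sketch. The second change targets the rmap itself: while a shared frame
has few sharers the reverse map can stay a plain list, and once the number of
<domain, gfn> back-references crosses a threshold it is converted to a chained
hash table, so lookups during CoW unsharing stop costing O(n) over all
sharers. Again a hedged illustration under assumed names (shared_rmap,
gfn_info, RMAP_HASH_THRESHOLD, RMAP_HASH_BUCKETS), not the data structures in
the series; the constants are placeholders for the "arbitrary ref count"
mentioned above.

#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

#define RMAP_HASH_THRESHOLD  256   /* placeholder cut-over point */
#define RMAP_HASH_BUCKETS    128

/* One <domain, gfn> back-reference to the shared frame. */
struct gfn_info {
    unsigned long gfn;
    uint16_t domain_id;
    struct gfn_info *next;
};

/* Per-shared-frame reverse map: a flat list that can mutate into a hash. */
struct shared_rmap {
    unsigned int nr_entries;
    bool is_hashed;
    union {
        struct gfn_info *list;        /* few sharers: plain linked list  */
        struct gfn_info **buckets;    /* many sharers: hash of lists     */
    };
};

static unsigned int rmap_hash(uint16_t domain_id, unsigned long gfn)
{
    return (unsigned int)((gfn ^ domain_id) % RMAP_HASH_BUCKETS);
}

/*
 * Once the list grows past the threshold, rehash every existing entry into
 * buckets.  Failure is harmless: the list keeps working, just slowly.
 */
static int rmap_maybe_convert(struct shared_rmap *rmap)
{
    struct gfn_info *cur, *next, **buckets;

    if ( rmap->is_hashed || rmap->nr_entries < RMAP_HASH_THRESHOLD )
        return 0;

    buckets = calloc(RMAP_HASH_BUCKETS, sizeof(*buckets));
    if ( !buckets )
        return -1;

    for ( cur = rmap->list; cur; cur = next )
    {
        unsigned int i = rmap_hash(cur->domain_id, cur->gfn);

        next = cur->next;
        cur->next = buckets[i];
        buckets[i] = cur;
    }

    rmap->buckets = buckets;
    rmap->is_hashed = true;
    return 0;
}

/* Lookup drops from O(n) to roughly O(n / RMAP_HASH_BUCKETS) per frame. */
static struct gfn_info *rmap_find(struct shared_rmap *rmap,
                                  uint16_t domain_id, unsigned long gfn)
{
    struct gfn_info *cur = rmap->is_hashed
                           ? rmap->buckets[rmap_hash(domain_id, gfn)]
                           : rmap->list;

    for ( ; cur; cur = cur->next )
        if ( cur->domain_id == domain_id && cur->gfn == gfn )
            return cur;

    return NULL;
}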