Re: [Xen-devel] [PATCH v3 0/9] Memory scrubbing from idle loop
Ping?

On 04/14/2017 11:37 AM, Boris Ostrovsky wrote:
> When a domain is destroyed, the hypervisor must scrub the domain's pages
> before giving them to another guest in order to prevent leaking the
> deceased guest's data. Currently this is done during guest destruction,
> possibly causing a very lengthy cleanup process.
>
> This series adds support for scrubbing released pages from the idle loop,
> making guest destruction significantly faster. For example, destroying a
> 1TB guest can now be completed in 40+ seconds, as opposed to about 9
> minutes using the existing scrubbing algorithm.
>
> Briefly, the new algorithm places dirty pages at the end of the heap's
> page list for each node/zone/order, to avoid having to scan the full list
> while searching for dirty pages. One processor from each node checks
> whether the node has any dirty pages and, if such pages are found, scrubs
> them. Scrubbing itself happens without holding the heap lock, so other
> users may access the heap in the meantime. If, while the idle loop is
> scrubbing a particular chunk of pages, that chunk is requested by the heap
> allocator, scrubbing is immediately stopped.
>
> On the allocation side, alloc_heap_pages() first tries to satisfy the
> allocation request using only clean pages. If this is not possible, the
> search is repeated and dirty pages are scrubbed by the allocator.
>
> This series is somewhat based on earlier work by Bob Liu.
>
> V1:
> * Only set the PGC_need_scrub bit for the buddy head, thus making it
>   unnecessary to scan the whole buddy
> * Fix spin_lock_cb()
> * Scrub CPU-less nodes
> * ARM support. Note that I have not been able to test this, only built
>   the binary
> * Added a scrub test patch (last one). Not sure whether it should be
>   considered for committing, but I have been running with it.
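The core idea of patches 2 and 3 (dirty buddies at the tail of the free list, clean ones at the head, allocator prefers clean pages and scrubs only as a fallback) can be illustrated with a minimal, self-contained sketch. This is not Xen code: the types `struct pg`/`struct freelist` and the helpers `pg_free()`/`pg_alloc()` are invented for illustration, loosely modeled on the series' description of `PGC_need_scrub` and `alloc_heap_pages()`.

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Toy model of one per-node/zone/order free list (not Xen's real types). */
struct pg {
    struct pg *next, *prev;
    bool need_scrub;        /* analog of the PGC_need_scrub flag */
    unsigned char data[16]; /* stand-in for page contents */
};

struct freelist {
    struct pg *head, *tail;
};

/*
 * Clean buddies are queued at the head, dirty ones at the tail, so all
 * dirty pages cluster at the end of the list and a clean-only search
 * can stop at the first dirty entry it meets.
 */
static void pg_free(struct freelist *fl, struct pg *p, bool dirty)
{
    p->need_scrub = dirty;
    p->next = p->prev = NULL;
    if (!fl->head) {
        fl->head = fl->tail = p;
    } else if (dirty) {
        p->prev = fl->tail;
        fl->tail->next = p;
        fl->tail = p;
    } else {
        p->next = fl->head;
        fl->head->prev = p;
        fl->head = p;
    }
}

/*
 * Because clean pages sit at the head, the allocator only ever sees a
 * dirty page when no clean ones remain; in that case it scrubs the page
 * itself, mirroring alloc_heap_pages()'s fallback pass.
 */
static struct pg *pg_alloc(struct freelist *fl)
{
    struct pg *p = fl->head;

    if (!p)
        return NULL;
    if (p->need_scrub) {
        memset(p->data, 0, sizeof(p->data)); /* scrub on allocation */
        p->need_scrub = false;
    }
    /* Unlink from the list head. */
    fl->head = p->next;
    if (p->next)
        p->next->prev = NULL;
    else
        fl->tail = NULL;
    return p;
}
```

The real series additionally keeps a per-page dirty bit (V3) and merges buddies across the clean/dirty boundary, but the list discipline above is what lets the idle-loop scrubber find work in O(1) per list instead of scanning.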
> V2:
> * merge_chunks() returns the new buddy head
> * scrub_free_pages() returns softirq pending status in addition to the
>   (factored out) status of unscrubbed memory
> * spin_lock uses inlined spin_lock_cb()
> * scrub debugging code checks the whole page, not just the first word
>
> V3:
> * Keep the dirty bit per page
> * Simplify merge_chunks() (now merge_and_free_buddy())
> * When scrubbing memory-only nodes, try to find the closest node
>
> Deferred:
> * Per-node heap locks. In addition to (presumably) improving performance
>   in general, once they are available we can parallelize scrubbing further
>   by allowing more than one core per node to do idle loop scrubbing.
> * AVX-based scrubbing
> * Use idle loop scrubbing during boot.
>
> Boris Ostrovsky (9):
>   mm: Separate free page chunk merging into its own routine
>   mm: Place unscrubbed pages at the end of pagelist
>   mm: Scrub pages in alloc_heap_pages() if needed
>   mm: Scrub memory from idle loop
>   mm: Do not discard already-scrubbed pages if softirqs are pending
>   spinlock: Introduce spin_lock_cb()
>   mm: Keep pages available for allocation while scrubbing
>   mm: Print number of unscrubbed pages in 'H' debug handler
>   mm: Make sure pages are scrubbed
>
>  xen/Kconfig.debug          |    7 +
>  xen/arch/arm/domain.c      |   13 +-
>  xen/arch/x86/domain.c      |    3 +-
>  xen/common/page_alloc.c    |  529 ++++++++++++++++++++++++++++++++++------
>  xen/common/spinlock.c      |    9 +-
>  xen/include/asm-arm/mm.h   |   10 +
>  xen/include/asm-x86/mm.h   |   10 +
>  xen/include/xen/mm.h       |    1 +
>  xen/include/xen/spinlock.h |    8 +
>  9 files changed, 513 insertions(+), 77 deletions(-)

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
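[Editor's note] The spin_lock_cb() primitive introduced by patch 6 lets a waiter run a callback on every failed acquisition attempt, which is how the allocator and the idle-loop scrubber can coordinate without long lock hold times. A toy single-threaded illustration follows; the type `toy_spinlock_t` and the helper `release_cb` are invented for this sketch and are not Xen's actual implementation (Xen's version builds on its ticket-lock internals).

```c
#include <stdatomic.h>
#include <stddef.h>

/* Toy lock: a single atomic flag, not Xen's real ticket spinlock. */
typedef struct {
    atomic_flag locked;
} toy_spinlock_t;

/*
 * Acquire the lock, invoking cb(data) on every failed attempt. In the
 * series, a waiter's callback can signal the scrubber to abandon its
 * current chunk, so the allocator never waits behind a long scrub.
 */
static void spin_lock_cb(toy_spinlock_t *l, void (*cb)(void *), void *data)
{
    while (atomic_flag_test_and_set_explicit(&l->locked,
                                             memory_order_acquire)) {
        if (cb)
            cb(data);
    }
}

/* Demo helper: a callback that releases the contended flag and counts
 * how many times it ran, standing in for "tell the holder to back off". */
static int cb_calls;
static void release_cb(void *data)
{
    cb_calls++;
    atomic_flag_clear_explicit((atomic_flag *)data, memory_order_release);
}
```

The key property is that the callback runs only while the lock is contended, so the common uncontended path costs nothing extra.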