Re: [Xen-devel] A good way to speed up the xl destroy time (guest page scrubbing)
On 12/08/2014 04:34 PM, Jan Beulich wrote:
>>>> On 07.12.14 at 14:43, <bob.liu@xxxxxxxxxx> wrote:
>
>> On 12/05/2014 08:24 PM, Jan Beulich wrote:
>>>>>> On 05.12.14 at 11:00, <bob.liu@xxxxxxxxxx> wrote:
>>>> 5. Potential workaround
>>>> 5.1 Use a per-cpu list in idle_loop()
>>>> Delist a batch of pages from the heap_list to a per-cpu list, then scrub
>>>> the per-cpu list and free the pages back to the heap_list.
>>>>
>>>> But Jan disagrees with this solution:
>>>> "You should really drop the idea of removing pages temporarily.
>>>> All you need to do is make sure a page being allocated and getting
>>>> simultaneously scrubbed by another CPU won't get passed to the
>>>> caller until the scrubbing finished."
>>>
>>> So you don't mention any downsides to this approach. If there are
>>> any, please name them. If there aren't, what's the reason not to
>>> go this route?
>>
>> The reason was that what you suggested was not very specific; I still have
>> no idea how to implement a patch which can "make sure a page being
>> allocated and getting simultaneously scrubbed by another CPU won't get
>> passed to the caller until the scrubbing finished".
>
> The scrubbing code would need to mark the page, and the allocation
> code would need to spin on such marked pages until the mark clears.

Thanks a lot, that's much clearer! Then do you think it is safe to iterate
the heap list without the spin lock in the scrubbing code?

Konrad also suggested a similar approach, which was to skip marked pages
(instead of spinning) in the allocator, but I always got a panic during
page_list_for_each(&heap_list) in the scrubbing code when the heap list was
not locked. The panic happened in page_list_next(); I think that's because
the alloc/free path modified the heap list concurrently.

-- 
Regards,
-Bob

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
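[Editor's note: for concreteness, below is a minimal C sketch of the
"mark while scrubbing, spin in the allocator" idea Jan describes above. The
flag name PGC_scrubbing, the cut-down struct page_info, and the helper
functions are made up for illustration; they are not the actual Xen page
allocator code, which would use Xen's own count_info flags, atomics and
cpu_relax() rather than C11 atomics.]

/*
 * Illustrative sketch only: a scrubbing flag on the page, set by the
 * scrubber and waited on by the allocator, so the page never has to
 * leave the heap list.  Names and types are hypothetical.
 */
#include <stdatomic.h>

struct page_info {
    _Atomic unsigned long count_info;   /* stand-in for Xen's count_info */
};

#define PGC_scrubbing  (1UL << 0)       /* assumed-free flag bit */

/* Scrubber (e.g. run from idle): mark, scrub in place, unmark. */
static void scrub_one_page(struct page_info *pg)
{
    atomic_fetch_or(&pg->count_info, PGC_scrubbing);
    /* ... zero the page contents here; the page stays on the heap list ... */
    atomic_fetch_and(&pg->count_info, ~PGC_scrubbing);
}

/* Allocator: a page just taken off the heap may still be under scrub on
 * another CPU; wait until the mark clears before handing it to the caller. */
static void wait_until_scrubbed(struct page_info *pg)
{
    while (atomic_load(&pg->count_info) & PGC_scrubbing)
        ;   /* cpu_relax() / pause in real hypervisor code */
}

[Konrad's variant would have the allocator test the flag once and skip the
page instead of spinning. Either way, the scrubber's traversal of heap_list
still races with insertions and removals in the alloc/free paths, which is
consistent with the panic in page_list_next() described in the mail above.]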