
Re: [Xen-devel] [RFC PATCH] xen: free_domheap_pages: delay page scrub to tasklet

On 05/20/2014 02:27 PM, Jan Beulich wrote:
>>>> On 20.05.14 at 04:14, <bob.liu@xxxxxxxxxx> wrote:
>> On 05/19/2014 07:34 PM, Jan Beulich wrote:
>>>>>> On 19.05.14 at 04:57, <lliubbo@xxxxxxxxx> wrote:
>>>> This patch tries to delay scrub_one_page() to a tasklet which will be
>>>> scheduled on all online physical CPUs, so that returning from 'xl/xm
>>>> destroy xxx' is much faster.
>>> At the price of impacting all other guests. I think this is too simplistic
>>> an approach. For one, I think the behavior ought to be configurable
>>> by the admin: Deferring the scrubbing means you can't use the
>>> memory for creating a new guest right away. And then you should
>>> be doing this only on idle CPUs, or (with care not to introduce
>>> security issues nor exhaustion of the DMA region) on CPUs actively
>>> requesting memory, where the request can't be fulfilled without using
>>> some of the not yet scrubbed memory.
>>> And btw., 10 min of cleanup time for 1Tb seems rather long
>>> regardless of the specific scrubber behavior - did you check
>>> whether decreasing the rate at which relinquish_memory() calls
>>> hypercall_preempt_check() wouldn't already reduce this by quite
>>> a bit?
>> I tried calling hypercall_preempt_check() only every 10,000 pages, but
>> the time wasn't reduced at all.
> So if you have the system scrub 1Tb at boot (via suitable
> dom0_mem=), how long does that take?

I only have a 32G machine; the 1Tb issue was reported by one of our test engineers.

On the 32G machine, with dom0_mem=2G, the scrub time reported at "(XEN)
Scrubbing Free RAM:" during boot is around 12s.

The xl destroy time for a 30G guest stays around 15s even after
decreasing the rate of calling hypercall_preempt_check().


Xen-devel mailing list


