
Re: [Xen-devel] [PATCH 2 of 4] xen, pod: Zero-check recently populated pages (checklast)


  • To: xen-devel@xxxxxxxxxxxxx
  • From: "Andres Lagar-Cavilla" <andres@xxxxxxxxxxxxxxxx>
  • Date: Wed, 27 Jun 2012 10:44:11 -0700
  • Cc: george.dunlap@xxxxxxxxxxxxx
  • Delivery-date: Wed, 27 Jun 2012 17:44:47 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xen.org>

> # HG changeset patch
> # User George Dunlap <george.dunlap@xxxxxxxxxxxxx>
> # Date 1340815812 -3600
> # Node ID ea827c449088a1017b6e5a9564eb33df70f8a9c6
> # Parent  b4e1fec1c98f6cbad666a972f473854518c25500
> xen,pod: Zero-check recently populated pages (checklast)
>
> When demand-populating pages due to guest accesses, check recently
> populated pages to see if we can reclaim them for the cache.  This
> should keep the PoD cache filled when the start-of-day scrubber is
> going through.
>
> The number 128 was chosen by experiment.  Windows does its page
> scrubbing in parallel; while a small number like 4 works well for
> single VMs, it breaks down as multiple vcpus are scrubbing different
> pages in parallel.  Increasing to 128 works well for higher numbers of
> vcpus.
>
> v2:
>  - Wrapped some long lines
>  - unsigned int for index, unsigned long for array
>
> Signed-off-by: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
>
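As context for the diff below: the patch keeps a small circular history of
recently populated superpage gfns, and each new population zero-checks the
entry it displaces from the ring.  A minimal standalone sketch of that
pattern, with illustrative names (HISTORY_MAX, history_push) that are not
the patch's:

    #include <stdio.h>

    /* The patch uses POD_HISTORY_MAX 128; 4 keeps the demo short. */
    #define HISTORY_MAX 4

    static unsigned long history[HISTORY_MAX];
    static unsigned int history_index;

    /* Record a newly populated gfn; return the entry it displaces,
     * i.e. the oldest one, which is the next zero-check candidate. */
    static unsigned long history_push(unsigned long gfn)
    {
        unsigned long displaced = history[history_index];

        history[history_index] = gfn;
        history_index = (history_index + 1) % HISTORY_MAX;

        return displaced;
    }

    int main(void)
    {
        unsigned long gfn;

        /* Populate six superpage-aligned gfns (512 pages apart); once
         * the ring is full, each push hands back the gfn recorded
         * HISTORY_MAX pushes earlier. */
        for ( gfn = 0x200; gfn < 0x200 + 6 * 512; gfn += 512 )
            printf("populated %#lx, checking %#lx\n",
                   gfn, history_push(gfn));

        return 0;
    }

The displaced entry is the oldest in the ring, so by the time a gfn comes
back around the guest's scrubber has usually finished zeroing it.
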
> diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
> --- a/xen/arch/x86/mm/p2m-pod.c
> +++ b/xen/arch/x86/mm/p2m-pod.c
> @@ -909,6 +909,27 @@ p2m_pod_emergency_sweep_super(struct p2m
>      p2m->pod.reclaim_super = i ? i - SUPERPAGE_PAGES : 0;
>  }
>
> +/* When populating a new superpage, look at recently populated superpages
> + * hoping that they've been zeroed.  This will snap up zeroed pages as
> + * soon as the guest OS is done with them. */
> +static void
> +p2m_pod_check_last_super(struct p2m_domain *p2m, unsigned long gfn_aligned)
> +{
> +    unsigned long check_gfn;
> +
> +    ASSERT(p2m->pod.last_populated_index < POD_HISTORY_MAX);
> +
> +    check_gfn = p2m->pod.last_populated[p2m->pod.last_populated_index];
> +
> +    p2m->pod.last_populated[p2m->pod.last_populated_index] = gfn_aligned;
> +
> +    p2m->pod.last_populated_index =
> +        ( p2m->pod.last_populated_index + 1 ) % POD_HISTORY_MAX;
> +
> +    p2m->pod.reclaim_super += p2m_pod_zero_check_superpage(p2m, check_gfn);
> +}
> +
> +
>  #define POD_SWEEP_STRIDE  16
>  static void
>  p2m_pod_emergency_sweep(struct p2m_domain *p2m)
> @@ -1066,6 +1087,12 @@ p2m_pod_demand_populate(struct p2m_domai
>          __trace_var(TRC_MEM_POD_POPULATE, 0, sizeof(t), &t);
>      }
>
> +    /* Check the last guest demand-populate */
> +    if ( p2m->pod.entry_count > p2m->pod.count
> +         && order == 9
s/9/PAGE_ORDER_2M/
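
Applied to this hunk (PAGE_ORDER_2M is 9 on x86, so behaviour is
unchanged), the check would read:

    if ( p2m->pod.entry_count > p2m->pod.count
         && order == PAGE_ORDER_2M
         && q & P2M_ALLOC )
        p2m_pod_check_last_super(p2m, gfn_aligned);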

Otherwise the series looks good to me.
Andres
> +         && q & P2M_ALLOC )
> +        p2m_pod_check_last_super(p2m, gfn_aligned);
> +
>      pod_unlock(p2m);
>      return 0;
>  out_of_memory:
> diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
> --- a/xen/include/asm-x86/p2m.h
> +++ b/xen/include/asm-x86/p2m.h
> @@ -287,6 +287,10 @@ struct p2m_domain {
>          unsigned         reclaim_super; /* Last gpfn of a scan */
>          unsigned         reclaim_single; /* Last gpfn of a scan */
>          unsigned         max_guest;    /* gpfn of max guest demand-populate */
> +#define POD_HISTORY_MAX 128
> +        /* gpfn of last guest superpage demand-populated */
> +        unsigned long    last_populated[POD_HISTORY_MAX];
> +        unsigned int     last_populated_index;
>          mm_lock_t        lock;         /* Locking of private pod structs,   *
>                                          * not relying on the p2m lock.      */
>      } pod;
>
>

