
Re: [Xen-devel] [PATCH v3 3/9] mm: Scrub pages in alloc_heap_pages() if needed



>>> On 14.04.17 at 17:37, <boris.ostrovsky@xxxxxxxxxx> wrote:
> When allocating pages in alloc_heap_pages() first look for clean pages.

As expressed before, there are cases where we don't really need
scrubbed pages. Hence the local variable "use_unscrubbed" below
should really be some form of input to alloc_heap_pages().
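
Something along these lines perhaps (flag name and bit position are
made up just for illustration; any free _MEMF_* bit would do):

    /* hypothetical new allocation flag, e.g. in xen/include/xen/mm.h */
    #define _MEMF_no_scrub    7
    #define  MEMF_no_scrub    (1U << _MEMF_no_scrub)

    /* in alloc_heap_pages() */
    bool use_unscrubbed = !!(memflags & MEMF_no_scrub);

Callers which are going to overwrite or scrub the memory themselves
could then pass MEMF_no_scrub and avoid the synchronous scrubbing
below.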

> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -700,34 +700,17 @@ static struct page_info *alloc_heap_pages(
>      unsigned int order, unsigned int memflags,
>      struct domain *d)
>  {
> -    unsigned int i, j, zone = 0, nodemask_retry = 0;
> -    nodeid_t first_node, node = MEMF_get_node(memflags), req_node = node;
> +    unsigned int i, j, zone, nodemask_retry;
> +    nodeid_t first_node, node, req_node;
>      unsigned long request = 1UL << order;
>      struct page_info *pg;
> -    nodemask_t nodemask = (d != NULL ) ? d->node_affinity : node_online_map;
> -    bool_t need_tlbflush = 0;
> +    nodemask_t nodemask;
> +    bool need_scrub, need_tlbflush = false, use_unscrubbed = false;
>      uint32_t tlbflush_timestamp = 0;
>  
>      /* Make sure there are enough bits in memflags for nodeID. */
>      BUILD_BUG_ON((_MEMF_bits - _MEMF_node) < (8 * sizeof(nodeid_t)));
>  
> -    if ( node == NUMA_NO_NODE )
> -    {
> -        if ( d != NULL )
> -        {
> -            node = next_node(d->last_alloc_node, nodemask);
> -            if ( node >= MAX_NUMNODES )
> -                node = first_node(nodemask);
> -        }
> -        if ( node >= MAX_NUMNODES )
> -            node = cpu_to_node(smp_processor_id());
> -    }
> -    first_node = node;
> -
> -    ASSERT(node < MAX_NUMNODES);
> -    ASSERT(zone_lo <= zone_hi);
> -    ASSERT(zone_hi < NR_ZONES);

The last two can remain where they are (but see also below).

> @@ -754,6 +740,28 @@ static struct page_info *alloc_heap_pages(
>           tmem_freeable_pages() )
>          goto try_tmem;
>  
> + again:

Is there any hope of getting away without such an ugly pseudo loop?
E.g. by turning this function into a helper of a relatively thin
wrapper named alloc_heap_pages(), which invokes the helper twice
unless use_unscrubbed (which, as said, ought to be an input) is
true?
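
I.e. roughly (helper name and parameter passing made up, merely to
illustrate the shape):

    static struct page_info *_alloc_heap_pages(
        unsigned int zone_lo, unsigned int zone_hi,
        unsigned int order, unsigned int memflags,
        struct domain *d, bool use_unscrubbed);

    static struct page_info *alloc_heap_pages(
        unsigned int zone_lo, unsigned int zone_hi,
        unsigned int order, unsigned int memflags,
        struct domain *d)
    {
        struct page_info *pg;

        /* First pass: accept clean pages only. */
        pg = _alloc_heap_pages(zone_lo, zone_hi, order, memflags, d,
                               false);
        if ( !pg )
            /* Second pass: also accept pages in need of scrubbing. */
            pg = _alloc_heap_pages(zone_lo, zone_hi, order, memflags, d,
                                   true);

        return pg;
    }

with the first call skipped (or its last argument set) when the
caller indicated it doesn't need scrubbed memory.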

> +    nodemask_retry = 0;
> +    nodemask = (d != NULL ) ? d->node_affinity : node_online_map;

Stray blank before closing paren; you may want to consider dropping
the != NULL altogether, at which point the parens won't be needed
anymore (unless, of course, all the code churn here can be undone
anyway).
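
I.e. simply

    nodemask = d ? d->node_affinity : node_online_map;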

> @@ -769,8 +777,16 @@ static struct page_info *alloc_heap_pages(
>  
>              /* Find smallest order which can satisfy the request. */
>              for ( j = order; j <= MAX_ORDER; j++ )
> +            {
>                  if ( (pg = page_list_remove_head(&heap(node, zone, j))) )
> -                    goto found;
> +                {
> +                    if ( (order == 0) || use_unscrubbed ||

Why is order-0 a special case here?


> @@ -832,6 +864,20 @@ static struct page_info *alloc_heap_pages(
>      if ( d != NULL )
>          d->last_alloc_node = node;
>  
> +    if ( need_scrub )
> +    {
> +        for ( i = 0; i < (1 << order); i++ )
> +        {
> +            if ( test_bit(_PGC_need_scrub, &pg[i].count_info) )
> +            {
> +                scrub_one_page(&pg[i]);
> +                pg[i].count_info &= ~PGC_need_scrub;

This is pointless, which will become more obvious once you move
this loop body into ...

> +                node_need_scrub[node]--;
> +            }
> +        }
> +        pg->u.free.dirty_head = false;
> +    }
> +
>      for ( i = 0; i < (1 << order); i++ )
>      {
>          /* Reference count must continuously be zero for free pages. */

... the already existing loop here. The "pointless" part is because
of

        pg[i].count_info = PGC_state_inuse;

but of course you'd need to adjust the immediately preceding
BUG_ON().
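
I.e. something like (just a sketch based on the code quoted above,
with the rest of the loop body left out; the dirty_head clearing can
of course stay ahead of the loop):

    for ( i = 0; i < (1 << order); i++ )
    {
        if ( need_scrub &&
             test_bit(_PGC_need_scrub, &pg[i].count_info) )
        {
            scrub_one_page(&pg[i]);
            node_need_scrub[node]--;
        }

        /* Reference count must continuously be zero for free pages. */
        BUG_ON((pg[i].count_info & ~PGC_need_scrub) != PGC_state_free);
        pg[i].count_info = PGC_state_inuse;
        ...
    }

Clearing PGC_need_scrub individually then becomes unnecessary, as
the flag gets overwritten together with the rest of count_info.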

Jan

