Re: [Xen-devel] [PATCH v2 2/9] mm: Place unscrubbed pages at the end of pagelist
>>> On 03.04.17 at 18:50, <boris.ostrovsky@xxxxxxxxxx> wrote:
> @@ -856,6 +874,7 @@ static int reserve_offlined_page(struct page_info *head)
>      int zone = page_to_zone(head), i, head_order = PFN_ORDER(head), count = 0;
>      struct page_info *cur_head;
>      int cur_order;
> +    bool_t need_scrub = !!test_bit(_PGC_need_scrub, &head->count_info);
With our use of plain bool now, there's no need for the !! here anymore.
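I.e. (just to illustrate) the line can simply become:

    bool need_scrub = test_bit(_PGC_need_scrub, &head->count_info);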
> @@ -897,8 +916,8 @@ static int reserve_offlined_page(struct page_info *head)
>      {
>  merge:
>          /* We don't consider merging outside the head_order. */
> -        page_list_add_tail(cur_head, &heap(node, zone, cur_order));
>          PFN_ORDER(cur_head) = cur_order;
> +        page_list_add_scrub(cur_head, node, zone, cur_order,
> +                            need_scrub);
With this re-arrangement, what's the point of also passing a
separate order argument to the function?
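Just to sketch what I mean (the helper's body isn't visible in this hunk, so its shape here is an assumption): with PFN_ORDER(cur_head) set before the call, the function could derive the order itself, e.g.

    /* Sketch only - assumes the helper appends dirty chunks at the list tail. */
    static void page_list_add_scrub(struct page_info *pg, unsigned int node,
                                    unsigned int zone, bool need_scrub)
    {
        unsigned int order = PFN_ORDER(pg);

        if ( need_scrub )
            page_list_add_tail(pg, &heap(node, zone, order));
        else
            page_list_add(pg, &heap(node, zone, order));
    }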
> @@ -933,6 +952,10 @@ static bool_t can_merge(struct page_info *buddy, unsigned int node,
>           (phys_to_nid(page_to_maddr(buddy)) != node) )
>          return false;
> 
> +    if ( need_scrub !=
> +         !!test_bit(_PGC_need_scrub, &buddy->count_info) )
> +        return false;
I don't think leaving the tree in a state where larger-order chunks
don't become available for allocation right away is going to be
acceptable. Hence, with this issue apparently being dealt with only
in patch 7, you should state clearly and visibly that (at least)
patches 2...7 should only be committed together.
> @@ -952,9 +977,10 @@ static struct page_info *merge_chunks(struct page_info *pg, unsigned int node,
>      {
>          /* Merge with predecessor block? */
>          buddy = pg - mask;
> -        if ( !can_merge(buddy, node, order) )
> +        if ( !can_merge(buddy, node, order, need_scrub) )
>              break;
> 
> +        pg->count_info &= ~PGC_need_scrub;
>          pg = buddy;
>          page_list_del(pg, &heap(node, zone, order));
>      }
> @@ -962,9 +988,10 @@ static struct page_info *merge_chunks(struct page_info *pg, unsigned int node,
>      {
>          /* Merge with successor block? */
>          buddy = pg + mask;
> -        if ( !can_merge(buddy, node, order) )
> +        if ( !can_merge(buddy, node, order, need_scrub) )
>              break;
> 
> +        buddy->count_info &= ~PGC_need_scrub;
>          page_list_del(buddy, &heap(node, zone, order));
>      }
For both of these, how come you can / want to clear the need-scrub
flag? Wouldn't it be better for each individual page to retain it, so
that when encountering a higher-order chunk you know which pages need
scrubbing and which don't? Couldn't that also be used to avoid
suppressing their merging here right away?
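To illustrate (a sketch only, and assuming node_need_scrub[] counts individual pages): with per-page flags retained across merges, the scrubber could then handle a partially-dirty higher-order chunk selectively:

    /* Sketch: scrub only those pages of the chunk which are actually dirty. */
    for ( i = 0; i < (1UL << order); i++ )
        if ( test_bit(_PGC_need_scrub, &pg[i].count_info) )
        {
            scrub_one_page(&pg[i]);
            pg[i].count_info &= ~PGC_need_scrub;
            node_need_scrub[node]--;
        }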
> +static void scrub_free_pages(unsigned int node)
> +{
> +    struct page_info *pg;
> +    unsigned int i, zone;
> +    int order;
There are no negative orders.
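Note that simply making the variable unsigned would turn the order >= 0 condition below into one that is always true; a counting-down loop over an unsigned type is commonly written along the lines of:

    unsigned int order;

    /* Iterates order = MAX_ORDER, MAX_ORDER - 1, ..., 0, then stops. */
    for ( order = MAX_ORDER + 1; order-- > 0; )
    {
        /* loop body as before */
    }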
> +    ASSERT(spin_is_locked(&heap_lock));
> +
> +    if ( !node_need_scrub[node] )
> +        return;
> +
> +    for ( zone = 0; zone < NR_ZONES; zone++ )
> +    {
> +        for ( order = MAX_ORDER; order >= 0; order-- )
> +        {
> +            while ( !page_list_empty(&heap(node, zone, order)) )
> +            {
> +                /* Unscrubbed pages are always at the end of the list. */
> +                pg = page_list_last(&heap(node, zone, order));
> +                if ( !test_bit(_PGC_need_scrub, &pg->count_info) )
> +                    break;
> +
> +                for ( i = 0; i < (1UL << order); i++)
Types of loop variable and upper bound do not match.
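I.e. with the bound being unsigned long, the variable wants to be unsigned long too:

    unsigned long i;   /* matches the unsigned long bound (1UL << order) */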
> +                    scrub_one_page(&pg[i]);
> +
> +                pg->count_info &= ~PGC_need_scrub;
> +
> +                page_list_del(pg, &heap(node, zone, order));
> +                (void)merge_chunks(pg, node, zone, order);
Pointless cast.
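I.e. simply:

    merge_chunks(pg, node, zone, order);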
> +                node_need_scrub[node] -= (1UL << order);
Perhaps worth returning right away if the new value is zero?
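E.g.:

    node_need_scrub[node] -= 1UL << order;
    if ( !node_need_scrub[node] )
        return;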
> +            }
> +        }
> +    }
> +}
> +
> +
Stray double blank lines.
> @@ -1253,7 +1326,7 @@ unsigned int online_page(unsigned long mfn, uint32_t *status)
>      spin_unlock(&heap_lock);
> 
>      if ( (y & PGC_state) == PGC_state_offlined )
> -        free_heap_pages(pg, 0);
> +        free_heap_pages(pg, 0, 0);
false (also elsewhere, and similarly when passing true)
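I.e.:

    free_heap_pages(pg, 0, false);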
> --- a/xen/include/asm-x86/mm.h
> +++ b/xen/include/asm-x86/mm.h
> @@ -233,6 +233,10 @@ struct page_info
>  #define PGC_count_width   PG_shift(9)
>  #define PGC_count_mask    ((1UL<<PGC_count_width)-1)
> 
> +/* Page needs to be scrubbed */
> +#define _PGC_need_scrub   PG_shift(10)
> +#define PGC_need_scrub    PG_mask(1, 10)
So why not a new PGC_state_dirty instead of this independent
flag? Pages other than PGC_state_free should never make it
to the scrubber, so the flag is meaningless for all other
PGC_state_*.
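Purely as a sketch of the alternative (the bit positions are made up for illustration; the count field would need to shrink by one bit, i.e. PGC_count_width becoming PG_shift(10), to make room for the wider state field):

    /* Mutually-exclusive page states: { inuse, offlining, offlined, free, dirty }. */
    #define PGC_state           PG_mask(7, 10)
    #define PGC_state_inuse     PG_mask(0, 10)
    #define PGC_state_offlining PG_mask(1, 10)
    #define PGC_state_offlined  PG_mask(2, 10)
    #define PGC_state_free      PG_mask(3, 10)
    #define PGC_state_dirty     PG_mask(4, 10)  /* free, but not yet scrubbed */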
Jan