Re: [Xen-devel] [PATCH v7 2/9] mm: Place unscrubbed pages at the end of pagelist
On 08/14/2017 06:37 AM, Jan Beulich wrote:
>>>> On 08.08.17 at 23:45, <boris.ostrovsky@xxxxxxxxxx> wrote:
>> .. so that it's easy to find pages that need to be scrubbed (those pages are
>> now marked with _PGC_need_scrub bit).
>>
>> We keep track of the first unscrubbed page in a page buddy using first_dirty
>> field. For now it can have two values, 0 (whole buddy needs scrubbing) or
>> INVALID_DIRTY_IDX (the buddy does not need to be scrubbed). Subsequent
>> patches will allow scrubbing to be interrupted, resulting in first_dirty
>> taking any value.
>>
>> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
> Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
> with one remark:
>
>> --- a/xen/include/asm-x86/mm.h
>> +++ b/xen/include/asm-x86/mm.h
>> @@ -88,7 +88,15 @@ struct page_info
>>          /* Page is on a free list: ((count_info & PGC_count_mask) == 0). */
>>          struct {
>>              /* Do TLBs need flushing for safety before next page use? */
>> -            bool_t need_tlbflush;
>> +            bool need_tlbflush:1;
>> +
>> +            /*
>> +             * Index of the first *possibly* unscrubbed page in the buddy.
>> +             * One more bit than maximum possible order to accommodate
>> +             * INVALID_DIRTY_IDX.
>> +             */
>> +#define INVALID_DIRTY_IDX ((1UL << (MAX_ORDER + 1)) - 1)
>> +            unsigned long first_dirty:MAX_ORDER + 1;
>>          } free;
> I think generated code will be better with the two fields swapped:
> That way reading first_dirty won't involve a shift, and accessing a
> single bit doesn't require shifts at all on many architectures.
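
To make the codegen point concrete, here is a small sketch of the two
orderings. This is not Xen code: bitfield allocation order is
implementation-defined, the bit positions below assume GCC's LSB-first
layout on x86, and MAX_ORDER is given a concrete value purely for
illustration.

    #include <stdbool.h>

    #define MAX_ORDER 20    /* assumed value, illustration only */

    struct layout_v7 {      /* ordering in the posted patch */
        bool need_tlbflush:1;                     /* bit 0 */
        unsigned long first_dirty:MAX_ORDER + 1;  /* bits 1..21: shift, then mask */
    };

    struct layout_swapped { /* ordering Jan suggests */
        unsigned long first_dirty:MAX_ORDER + 1;  /* bits 0..20: a single mask */
        bool need_tlbflush:1;                     /* bit 21: single-bit test */
    };

With the swapped layout, reading first_dirty compiles to one AND, while
need_tlbflush remains a single bit-test in either position.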
Ok, I will then keep need_tlbflush as the last field, so the final struct
(as defined in patch 7) will look like:
struct {
    unsigned long first_dirty:MAX_ORDER + 1;
    unsigned long scrub_state:2;
    bool need_tlbflush:1;
};
-boris
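
For reference, a standalone sketch showing how the final layout and
INVALID_DIRTY_IDX fit together. Again a sketch, not the actual Xen header
(which embeds this struct in struct page_info and defines MAX_ORDER
elsewhere); the MAX_ORDER value and the free_info name here are made up
for illustration.

    #include <stdbool.h>
    #include <stdio.h>

    #define MAX_ORDER 20    /* assumed here; Xen defines the real value */

    /*
     * One more bit than the maximum possible order, so the all-ones
     * pattern is free to serve as the "nothing to scrub" sentinel.
     */
    #define INVALID_DIRTY_IDX ((1UL << (MAX_ORDER + 1)) - 1)

    struct free_info {
        unsigned long first_dirty:MAX_ORDER + 1;
        unsigned long scrub_state:2;
        bool need_tlbflush:1;
    };

    int main(void)
    {
        /* A freshly freed, fully dirty buddy: scrubbing starts at page 0. */
        struct free_info f = { .first_dirty = 0 };

        if ( f.first_dirty != INVALID_DIRTY_IDX )
            printf("scrub from page %lu\n", (unsigned long)f.first_dirty);

        /* Once the whole buddy has been scrubbed, mark it clean. */
        f.first_dirty = INVALID_DIRTY_IDX;

        return 0;
    }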