
Re: [PATCH v2 2/3] xen/heap: Split init_heap_pages() in two


  • To: Julien Grall <julien@xxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Mon, 18 Jul 2022 12:57:30 +0200
  • Cc: Julien Grall <jgrall@xxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Mon, 18 Jul 2022 10:57:38 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 18.07.2022 12:08, Julien Grall wrote:
> On 18/07/2022 10:31, Jan Beulich wrote:
>> On 15.07.2022 19:03, Julien Grall wrote:
>>> --- a/xen/common/page_alloc.c
>>> +++ b/xen/common/page_alloc.c
>>> @@ -1778,16 +1778,44 @@ int query_page_offline(mfn_t mfn, uint32_t *status)
>>>   }
>>>   
>>>   /*
>>> - * Hand the specified arbitrary page range to the specified heap zone
>>> - * checking the node_id of the previous page.  If they differ and the
>>> - * latter is not on a MAX_ORDER boundary, then we reserve the page by
>>> - * not freeing it to the buddy allocator.
>>> + * This function should only be called with valid pages from the same NUMA
>>> + * node.
>>>    */
>>> +static void _init_heap_pages(const struct page_info *pg,
>>> +                             unsigned long nr_pages,
>>> +                             bool need_scrub)
>>> +{
>>> +    unsigned long s, e;
>>> +    unsigned int nid = phys_to_nid(page_to_maddr(pg));
>>> +
>>> +    s = mfn_x(page_to_mfn(pg));
>>> +    e = mfn_x(mfn_add(page_to_mfn(pg + nr_pages - 1), 1));
>>> +    if ( unlikely(!avail[nid]) )
>>> +    {
>>> +        bool use_tail = IS_ALIGNED(s, 1UL << MAX_ORDER) &&
>>> +                        (find_first_set_bit(e) <= find_first_set_bit(s));
>>> +        unsigned long n;
>>> +
>>> +        n = init_node_heap(nid, s, nr_pages, &use_tail);
>>> +        BUG_ON(n > nr_pages);
>>> +        if ( use_tail )
>>> +            e -= n;
>>> +        else
>>> +            s += n;
>>> +    }
>>> +
>>> +    while ( s < e )
>>> +    {
>>> +        free_heap_pages(mfn_to_page(_mfn(s)), 0, need_scrub);
>>> +        s += 1UL;
>>
>> ... the more conventional s++ or ++s used here?
> 
> I would prefer to keep using "s += 1UL" here because:
>    * it will be replaced with a proper order in the follow-up patch. So
> this is temporary.

Fair enough.

Jan

>    * one could argue that if I use "s++" then I should also switch to a
> for loop, which would make sense here but not in the next patch.
> 
> Cheers,
> 
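[Archive note: the "proper order" Julien mentions above refers to the follow-up
patch in this series, which frees each contiguous run in maximal power-of-two
chunks instead of one page at a time. The standalone C sketch below only
illustrates that chunking idea; the MAX_ORDER value, the chunk_order() helper
and the printf() stand-in for free_heap_pages() are illustrative assumptions,
not the actual patch.]

#include <stdio.h>

#define MAX_ORDER 18 /* placeholder; the real value is architecture dependent */

/* Largest power-of-2 chunk that starts at s, is order-aligned, and fits in [s, e). */
static unsigned int chunk_order(unsigned long s, unsigned long e)
{
    /* Start from the order permitted by the alignment of s (s == 0 is fully aligned). */
    unsigned int order = s ? (unsigned int)__builtin_ctzl(s) : MAX_ORDER;

    if ( order > MAX_ORDER )
        order = MAX_ORDER;

    /* Shrink the chunk until it no longer overruns e. */
    while ( order && s + (1UL << order) > e )
        order--;

    return order;
}

int main(void)
{
    /* Arbitrary example range of MFNs, [s, e). */
    unsigned long s = 0x12345, e = 0x20000;

    while ( s < e )
    {
        unsigned int order = chunk_order(s, e);

        /* In the real allocator this step would be a free_heap_pages() call. */
        printf("free [%#lx, %#lx) order %u\n", s, s + (1UL << order), order);
        s += 1UL << order;
    }

    return 0;
}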
