[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [PATCH v2] xen/arm: Fully initialise struct membanks_hdr fields


  • To: Michal Orzel <michal.orzel@xxxxxxx>
  • From: Luca Fancellu <Luca.Fancellu@xxxxxxx>
  • Date: Thu, 9 Jan 2025 10:27:43 +0000
  • Accept-language: en-GB, en-US
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Bertrand Marquis <Bertrand.Marquis@xxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, Oleksii Kurochko <oleksii.kurochko@xxxxxxxxx>
  • Delivery-date: Thu, 09 Jan 2025 10:27:53 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Nodisclaimer: true
  • Thread-index: AQHbYnjhSKvjiFv/ZkKm32Sq0WkbbrMONdAAgAACYACAAAFJAIAAA8eA
  • Thread-topic: [PATCH v2] xen/arm: Fully initialise struct membanks_hdr fields

>> 
>>> 
>>>> ---
>>>> ---
>>>> xen/arch/arm/domain_build.c       | 13 ++++---------
>>>> xen/arch/arm/include/asm/kernel.h |  5 ++++-
>>>> xen/arch/arm/static-shmem.c       |  3 ++-
>>>> xen/include/xen/bootfdt.h         | 16 ++++++++++++++++
>>>> 4 files changed, 26 insertions(+), 11 deletions(-)
>>>> 
>>>> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>>>> index b072a16249fe..9e3132fb21d8 100644
>>>> --- a/xen/arch/arm/domain_build.c
>>>> +++ b/xen/arch/arm/domain_build.c
>>>> @@ -1039,7 +1039,7 @@ void __init allocate_memory(struct domain *d, struct kernel_info *kinfo)
>>>>     */
>>>>    if ( is_hardware_domain(d) )
>>>>    {
>>>> -        struct membanks *gnttab = xzalloc_flex_struct(struct membanks, bank, 1);
>>>> +        struct membanks *gnttab = membanks_xzalloc(1, RESERVED_MEMORY);
>>>>        /*
>>>>         * Exclude the following regions:
>>>>         * 1) Remove reserved memory
>>>> @@ -1057,13 +1057,10 @@ void __init allocate_memory(struct domain *d, struct kernel_info *kinfo)
>>>>        gnttab->bank[0].start = kinfo->gnttab_start;
>>>>        gnttab->bank[0].size = kinfo->gnttab_size;
>>>> 
>>>> -        hwdom_free_mem = xzalloc_flex_struct(struct membanks, bank,
>>>> -                                             NR_MEM_BANKS);
>>>> +        hwdom_free_mem = membanks_xzalloc(NR_MEM_BANKS, MEMORY);
>>>>        if ( !hwdom_free_mem )
>>>>            goto fail;
>>>> 
>>>> -        hwdom_free_mem->max_banks = NR_MEM_BANKS;
>>>> -
>>>>        if ( find_unallocated_memory(kinfo, mem_banks, ARRAY_SIZE(mem_banks),
>>>>                                     hwdom_free_mem, add_hwdom_free_regions) )
>>>>            goto fail;
>>>> @@ -1293,7 +1290,7 @@ static int __init find_host_extended_regions(const struct kernel_info *kinfo,
>>>>                                             struct membanks *ext_regions)
>>>> {
>>>>    int res;
>>>> -    struct membanks *gnttab = xzalloc_flex_struct(struct membanks, bank, 1);
>>>> +    struct membanks *gnttab = membanks_xzalloc(1, RESERVED_MEMORY);
>>>> 
>>>>    /*
>>>>     * Exclude the following regions:
>>>> @@ -1374,12 +1371,10 @@ int __init make_hypervisor_node(struct domain *d,
>>>>    }
>>>>    else
>>>>    {
>>>> -        ext_regions = xzalloc_flex_struct(struct membanks, bank, NR_MEM_BANKS);
>>>> +        ext_regions = membanks_xzalloc(NR_MEM_BANKS, RESERVED_MEMORY);
>>> I'm a bit confused about the expectations behind using the different types of
>>> enum region_type, mostly because it can point to different address spaces
>>> depending on the context. Above, you marked gnttab as RESERVED_MEMORY
>>> (I guess because this region has already been found - but in fact it is
>>> still unused) and hwdom_free_mem as MEMORY. So I would at least expect
>>> ext_regions to be of MEMORY type as well. After all, both hwdom_free_mem
>>> and ext_regions contain banks of unused/free memory (although the former
>>> lists host memory while the latter can also contain guest physical memory).
>>> Could you please clarify the intended use?
>> 
>> You are right, that should be MEMORY, my bad! Could that be addressed on
>> commit, or should I send another version?
> I can do that on commit, but first, can you please explain what the intended
> use of enum region_type is?
> At first I was expecting gnttab to be MEMORY as well. I don't see a
> difference between a region prepared by Xen for a domain to store the gnttab
> and the extended regions. Both describe regions of unused address space. It's
> just that the region for the gnttab must always be present, whereas the
> extended regions don't have to be.

Right, at first I thought gnttab could be reserved memory, but now that you
have pointed out it's not the right thing to do, I remember that these types
reflect the device tree grouping of the memory banks: RESERVED_MEMORY is only
for regions under the /reserved-memory node, STATIC_SHARED_MEMORY is for the
'xen,domain-shared-memory-v1' compatible nodes, and MEMORY is for /memory
regions.

Now I would say that we could also use MEMORY for regions prepared by Xen, as
long as we don't need to distinguish them from /memory regions.
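
(For illustration only: a minimal sketch of what such a helper could look
like, assuming membanks_xzalloc() simply wraps xzalloc_flex_struct() and
fills in the struct membanks_hdr fields. The exact field and enumerator
names below are assumptions based on the quoted diff, not taken from the
actual Xen headers.)

/* Hypothetical sketch: field/enumerator names are assumptions. */
enum region_type { MEMORY, RESERVED_MEMORY, STATIC_SHARED_MEMORY };

static inline struct membanks *membanks_xzalloc(unsigned int nr,
                                                enum region_type type)
{
    struct membanks *banks = xzalloc_flex_struct(struct membanks, bank, nr);

    if ( !banks )
        return NULL;

    /* Fully initialise the common header instead of leaving it to callers. */
    banks->max_banks = nr;
    banks->type = type;

    return banks;
}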

Please let me know your thoughts.

Cheers,
Luca


 

