
RE: [PATCH 1/1] xen/efi: Config parsing: Free the same page count as allocated.


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Bernhard Kaindl <bernhard.kaindl@xxxxxxxxxx>
  • Date: Wed, 22 Apr 2026 08:46:47 +0000
  • Accept-language: en-GB, en-US
  • Cc: "Daniel P. Smith" <dpsmith@xxxxxxxxxxxxxxxxxxxx>, Marek Marczykowski-Górecki <marmarek@xxxxxxxxxxxxxxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Volodymyr Babchuk <volodymyr_babchuk@xxxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>
  • Delivery-date: Wed, 22 Apr 2026 08:46:59 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH 1/1] xen/efi: Config parsing: Free the same page count as allocated.

Jan Beulich wrote at 10:20:
> On 22.04.2026 08:48, Jan Beulich wrote:
>
> > As indicated above, I'm not convinced though that we really want to
> > alter file->size. I'd instead add 1 in the few instances of
> > "efi_bs->FreePages(cfg.addr, PFN_UP(cfg.size))". Sadly the adding of 1
> > is conditional in read_section(), so maybe a "need_to_increment"
> > boolean wants adding next to the need_to_free one? Then a small helper
> > function may be warranted for those few instances of freeing cfg.addr.
> 
> Actually, no separate flag looks to be needed. If for the cfg file we set
> ->need_to_free, we know we allocated the one extra byte.

No: file->need_to_free is always set when AllocatePages succeeds, for the
cfg file just like for the other files; see xen/common/efi/boot.c,
read_file(), lines 862 to 867:

    ret = efi_bs->AllocatePages(AllocateMaxAddress, EfiLoaderData,
                                PFN_UP(size + (file == &cfg)), &file->addr);
    if ( EFI_ERROR(ret) )
        goto fail;

    file->need_to_free = true;

Given that, I think the EFI maintainers should state their preference on how
to free the allocated pages. The safest option looks to be recording the
allocation size in a new file->nr_allocated_pages field, giving the free path
a clean, dedicated value to use. I can update the patch that way, but I'll
wait for the maintainers' opinion first.
 
Bernhard

PS: I'm leaving this for the EFI maintainers for now, as I need to focus on
the review of the NUMA claims series.
