
Re: [PATCH V2] xen/arm: Initialize acpi_disabled to true during declaration



Hi,

On 26/03/2025 08:57, Orzel, Michal wrote:


On 25/03/2025 17:54, Oleksandr Tyshchenko wrote:


On 25.03.25 18:09, Julien Grall wrote:


Hi Oleksandr,

Hello Julien


On 25/03/2025 16:05, Oleksandr Tyshchenko wrote:
Furthermore, what happens if we decide to use ACPI afterwards? Wouldn't
this mean that the static regions would stay reserved even though ACPI
doesn't use static memory (all the memory is expected to be given to the
allocator)?
I don't think such a hybrid configuration is valid (booting with ACPI yet
declaring reserved regions in the DT). See commit:
9c2bc0f24b2ba7082df408b3c33ec9a86bf20cf0

I don't think the commit is preventing a hybrid configuration. It is just
saying that the region (which could be a static region, since static
memory is not supported with ACPI) will be unreserved.

IOW, when booting with Device-Tree you may be able to use static memory.
But if you were booting with ACPI, static memory is not supported and
therefore the regions should be freed for other purposes.
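To make the point above concrete, here is a hypothetical, self-contained sketch of "unreserve the static banks once we know ACPI is in use". The names (struct membank, MEMBANK_STATIC_DOMAIN) mirror Xen's, but the types and the allocator helper are mock-ups for illustration, not Xen's actual implementation:

```c
/* Hypothetical sketch -- not Xen's actual code. Illustrates handing
 * DT-declared static-memory banks back to the allocator when ACPI ends
 * up enabled. struct membank / MEMBANK_STATIC_DOMAIN mirror Xen's names;
 * everything else is a mock-up. */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define MEMBANK_STATIC_DOMAIN 1  /* bank declared via xen,static-mem in DT */

struct membank {
    unsigned long start, size;
    unsigned int flags;
};

struct meminfo {
    size_t nr_banks;
    struct membank bank[8];
};

/* Mock stand-in for returning a range to the boot allocator. */
static unsigned long freed_total;
static void give_back_to_allocator(unsigned long start, unsigned long size)
{
    (void)start;
    freed_total += size;
}

/* Once ACPI is known to be in use, static regions are meaningless:
 * unreserve every bank carrying MEMBANK_STATIC_DOMAIN. */
static void unreserve_static_banks(struct meminfo *reserved, bool acpi_disabled)
{
    if ( acpi_disabled )
        return; /* booting from DT: static memory stays reserved */

    for ( size_t i = 0; i < reserved->nr_banks; i++ )
    {
        struct membank *b = &reserved->bank[i];

        if ( b->flags & MEMBANK_STATIC_DOMAIN )
        {
            give_back_to_allocator(b->start, b->size);
            b->flags &= ~MEMBANK_STATIC_DOMAIN;
        }
    }
}
```

With acpi_disabled == true the loop is skipped entirely, which is the DT boot path where static memory remains reserved.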


Julien, I see your points, but the current patch does not attempt to
make static (reserved) memory work properly on an ACPI-based system (if
it is even available there); it tries to solve a real issue on a
device-tree-based system where Xen was compiled with CONFIG_ACPI=y
(perhaps unintentionally).

I am not asking to make ACPI work with static memory. I am asking to not
break ACPI if the Device-Tree specifies a static memory region.

However, I wonder why this has not been noticed so far.

ACPI is not a supported feature and is gated behind UNSUPPORTED. So the
implication is that you have enabled UNSUPPORTED, and anything can
happen really ;).

Indeed, this answers the question.




It took some time to understand why just enabling CONFIG_STATIC_MEMORY=y
triggered a BUG in common code. It turned out that CONFIG_ACPI=y in my
Xen .config was the cause (which is why I wrote such a long description,
to provide full context).

As I wrote above, the only thing I am asking is that memory for static
regions should be unreserved when ACPI is enabled, as is already the
case today.

So the concern is that keeping the static memory regions provided by the
device tree reserved would be a potential waste of memory on a real
ACPI system? Or did I miss something?

I see two options for unreserving the static memory regions on a real
ACPI system. I assume we should unreserve only after we know for sure
that we are running with ACPI (i.e. after acpi_boot_table_init()
completes and acpi_disabled reflects the real state of things), right?

1. either call acpi_boot_table_init() before setup_mm() in Arm64's
start_xen().
This cannot be done. acpi_boot_table_init() calls ACPI functions that
make use of VMAP and alloc_boot_pages, so the boot allocator is expected
to be populated by that point.

2. or go through reserved_mem->nr_banks and unreserve everything marked
with MEMBANK_STATIC_DOMAIN after acpi_boot_table_init() completes
What if we split acpi_boot_table_init() into 2 parts, where the first is
used to determine the real "acpi_disabled" value and is called before
setup_mm(), and the second is used to parse the tables?

That would be fine with me in principle.

There's one issue with this approach. Today, even after we
evaluate that we should run with ACPI enabled [*], an error in ACPI
table parsing is ignored and ACPI is set to disabled. That's not really
in the spirit of our current methodology, where we should panic if the
user requested something that does not work. Example: today, even after
specifying "acpi=force", an error in table parsing bails out to the DT
approach, which is not what the user wanted.
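A hypothetical sketch of the "honour the user's explicit request" semantics being described: if acpi=force was given, a parse error should stop the boot rather than silently fall back to DT. The enum and the recording panic stub are mock-ups for illustration, not Xen's actual interfaces:

```c
/* Hypothetical sketch -- not Xen's code. Shows the proposed behaviour on
 * an ACPI table-parse error: panic when the user passed acpi=force,
 * fall back to DT otherwise. */
#include <assert.h>
#include <stdbool.h>

enum acpi_mode { ACPI_OFF, ACPI_ON, ACPI_FORCE };

static bool panicked;

/* Mock panic(): records the event instead of halting, so it is testable. */
static void mock_panic(const char *msg)
{
    (void)msg;
    panicked = true;
}

/* Called when ACPI table parsing fails; returns true iff we fall back to DT. */
static bool handle_parse_error(enum acpi_mode mode)
{
    /* Today's behaviour bails out to DT even for acpi=force; the
     * suggested fix is to honour the user's explicit request instead. */
    if ( mode == ACPI_FORCE )
    {
        mock_panic("acpi=force requested but ACPI tables are unusable");
        return false; /* unreachable after a real panic() */
    }
    return true; /* best-effort mode: quietly fall back to DT */
}
```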

If you plan to send such a patch, can you have a look at [1]? This was an attempt to panic if the user requested ACPI and we can't enable it.

[1] https://lore.kernel.org/all/1486149538-20432-6-git-send-email-julien.grall@xxxxxxx/


~Michal


--
Julien Grall



