[Xen-devel] [PATCH v2 3/3] xen/arm: fix duplicate memory node in DT
When reserved-memory regions are present in the host device tree, dom0
is started with multiple memory nodes. Each memory node should have a
unique name, but today they are all called "memory", leading to Linux
printing the following warning at boot:

  OF: Duplicate name in base, renamed to "memory#1"

This patch fixes the problem by appending "@<unit-address>" to the
name, as per the Device Tree specification, where <unit-address>
matches the base address of the first region.

Fixes: 248faa637d2 ("xen/arm: add reserved-memory regions to the dom0 memory node")
Reported-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>
Signed-off-by: Stefano Stabellini <stefano.stabellini@xxxxxxxxxx>
---
Changes in v2:
- fix buf size calculation: the number is 64-bit and printed as
  hexadecimal
- move check on nr_banks to a separate patch
---
 xen/arch/arm/domain_build.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index ea01aada0b..3de4dafaed 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -646,6 +646,8 @@ static int __init make_memory_node(const struct domain *d,
     int res, i;
     int reg_size = addrcells + sizecells;
     int nr_cells = reg_size * mem->nr_banks;
+    /* Placeholder for memory@ + a 64-bit number + \0 */
+    char buf[24];
     __be32 reg[NR_MEM_BANKS * 4 /* Worst case addrcells + sizecells */];
     __be32 *cells;
 
@@ -657,7 +659,8 @@ static int __init make_memory_node(const struct domain *d,
              reg_size, nr_cells);
 
     /* ePAPR 3.4 */
-    res = fdt_begin_node(fdt, "memory");
+    snprintf(buf, sizeof(buf), "memory@%"PRIx64, mem->bank[0].start);
+    res = fdt_begin_node(fdt, buf);
     if ( res )
         return res;
--
2.17.1
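For reference, a minimal standalone sketch (plain libc C, not Xen code;
the main() harness is purely illustrative) of why 24 bytes is enough
for buf: "memory@" is 7 characters, a 64-bit value printed with
%"PRIx64 is at most 16 hex digits, and one more byte holds the
terminating '\0'.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Worst case: all 64 address bits set, i.e. 16 hex digits. */
    uint64_t start = UINT64_MAX;

    /* 7 ("memory@") + 16 (hex digits) + 1 ('\0') = 24 bytes. */
    char buf[24];

    int n = snprintf(buf, sizeof(buf), "memory@%"PRIx64, start);

    /* Prints: memory@ffffffffffffffff (23 characters) */
    printf("%s (%d characters)\n", buf, n);
    return 0;
}

With a more typical bank base such as 0x40000000, the same format
string produces the node name "memory@40000000", matching the
"<node-name>@<unit-address>" convention in the Device Tree
specification.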