[PATCH v3 08/17] xen/arm: introduce a helper to parse device tree memory node
From: Wei Chen <wei.chen@xxxxxxx>

The NUMA ID of a memory block is stored in the device tree's memory
nodes as the "numa-node-id" property. We need a new helper to parse
and verify this ID from memory nodes.

Signed-off-by: Wei Chen <wei.chen@xxxxxxx>
Signed-off-by: Henry Wang <Henry.Wang@xxxxxxx>
---
v2 -> v3:
1. No change.

v1 -> v2:
1. Move the numa_disabled() check into fdt_parse_numa_memory_node().
2. Use numa_bad() to replace bad_srat().
3. Replace tabs with spaces.
4. Align parameters.
5. Return -ENODATA for a normal DTB without NUMA info.
6. Un-addressed comment: "Why not parse numa-node-id and call
   fdt_numa_memory_affinity_init from
   xen/arch/arm/bootfdt.c:device_tree_get_meminfo. Is it because
   device_tree_get_meminfo is called too early?"
   I checked the device_tree_get_meminfo code and I think the answer is
   similar to my reply on the RFC: I prefer a unified NUMA initialization
   entry point and do not want to spread the NUMA parsing code across
   different places.
7. Use the node id as a dummy PXM for numa_update_node_memblks().
---
 xen/arch/arm/numa_device_tree.c | 89 +++++++++++++++++++++++++++++++++
 1 file changed, 89 insertions(+)

diff --git a/xen/arch/arm/numa_device_tree.c b/xen/arch/arm/numa_device_tree.c
index 52051e4537..9796490747 100644
--- a/xen/arch/arm/numa_device_tree.c
+++ b/xen/arch/arm/numa_device_tree.c
@@ -34,6 +34,26 @@ static int __init fdt_numa_processor_affinity_init(nodeid_t node)
     return 0;
 }
 
+/* Callback for parsing of the memory regions affinity */
+static int __init fdt_numa_memory_affinity_init(nodeid_t node,
+                                                paddr_t start, paddr_t size)
+{
+    if ( !numa_memblks_available() )
+    {
+        dprintk(XENLOG_WARNING,
+                "Too many numa entries, try bigger NR_NODE_MEMBLKS\n");
+        return -EINVAL;
+    }
+
+    numa_fw_nid_name = "numa-node-id";
+    if ( !numa_update_node_memblks(node, node, start, size, false) )
+        return -EINVAL;
+
+    device_tree_numa = DT_NUMA_ON;
+
+    return 0;
+}
+
 /* Parse CPU NUMA node info */
 static int __init fdt_parse_numa_cpu_node(const void *fdt, int node)
 {
@@ -62,3 +82,72 @@ static int __init fdt_parse_numa_cpu_node(const void *fdt, int node)
 
     return fdt_numa_processor_affinity_init(nid);
 }
+
+/* Parse memory node NUMA info */
+static int __init fdt_parse_numa_memory_node(const void *fdt, int node,
+                                             const char *name,
+                                             unsigned int addr_cells,
+                                             unsigned int size_cells)
+{
+    unsigned int nid;
+    int ret = 0, len;
+    paddr_t addr, size;
+    const struct fdt_property *prop;
+    unsigned int idx, ranges;
+    const __be32 *addresses;
+
+    if ( numa_disabled() )
+        return -EINVAL;
+
+    /*
+     * device_tree_get_u32 will return NUMA_NO_NODE when this memory
+     * DT node doesn't have numa-node-id. This can help us to
+     * distinguish a bad DTB and a normal DTB without NUMA info.
+     */
+    nid = device_tree_get_u32(fdt, node, "numa-node-id", NUMA_NO_NODE);
+    if ( nid == NUMA_NO_NODE )
+    {
+        numa_fw_bad();
+        return -ENODATA;
+    }
+    else if ( nid >= MAX_NUMNODES )
+    {
+        printk(XENLOG_WARNING "Node id %u exceeds maximum value\n", nid);
+        goto invalid_data;
+    }
+
+    prop = fdt_get_property(fdt, node, "reg", &len);
+    if ( !prop )
+    {
+        printk(XENLOG_WARNING
+               "fdt: node `%s': missing `reg' property\n", name);
+        goto invalid_data;
+    }
+
+    addresses = (const __be32 *)prop->data;
+    ranges = len / (sizeof(__be32) * (addr_cells + size_cells));
+    for ( idx = 0; idx < ranges; idx++ )
+    {
+        device_tree_get_reg(&addresses, addr_cells, size_cells, &addr, &size);
+        /* Skip zero size ranges */
+        if ( !size )
+            continue;
+
+        ret = fdt_numa_memory_affinity_init(nid, addr, size);
+        if ( ret )
+            goto invalid_data;
+    }
+
+    if ( idx == 0 )
+    {
+        printk(XENLOG_ERR
+               "bad property in memory node, idx=%u ret=%d\n", idx, ret);
+        goto invalid_data;
+    }
+
+    return 0;
+
+invalid_data:
+    numa_fw_bad();
+    return -EINVAL;
+}
-- 
2.25.1
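
As background, a memory node of the kind this helper parses might look
like the following sketch. The node name, base address and size are
made-up example values (assuming #address-cells = <2> and
#size-cells = <2> in the parent node); only the "reg" and "numa-node-id"
properties are what fdt_parse_numa_memory_node() actually reads:

    memory@80000000 {
        device_type = "memory";
        /* example range: base 0x80000000, size 1 GiB
           (2 address cells, 2 size cells) */
        reg = <0x0 0x80000000 0x0 0x40000000>;
        /* example NUMA id: this range belongs to node 0 */
        numa-node-id = <0>;
    };

The helper first reads "numa-node-id" (falling back to NUMA_NO_NODE when
the property is absent), then walks each (address, size) pair in "reg"
and hands it to fdt_numa_memory_affinity_init(), so a node with several
ranges simply registers several memory blocks under the same NUMA id.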