RE: [PATCH v2 6/9] xen/x86: move NUMA scan nodes codes from x86 to common
Hi Jan,

> -----Original Message-----
> From: Jan Beulich <jbeulich@xxxxxxxx>
> Sent: 12 July 2022 22:21
> To: Wei Chen <Wei.Chen@xxxxxxx>
> Cc: nd <nd@xxxxxxx>; Andrew Cooper <andrew.cooper3@xxxxxxxxxx>; Roger Pau
> Monné <roger.pau@xxxxxxxxxx>; Wei Liu <wl@xxxxxxx>; George Dunlap
> <george.dunlap@xxxxxxxxxx>; Julien Grall <julien@xxxxxxx>; Stefano
> Stabellini <sstabellini@xxxxxxxxxx>; xen-devel@xxxxxxxxxxxxxxxxxxxx
> Subject: Re: [PATCH v2 6/9] xen/x86: move NUMA scan nodes codes from x86
> to common
>
> On 08.07.2022 16:54, Wei Chen wrote:
> > x86 has implemented a set of code to scan NUMA nodes. This code
> > parses NUMA memory and processor information from the ACPI SRAT
> > table. But apart from some ACPI-specific parts, most of the scan
> > code, such as memory block validation, node memory range updates
> > and some sanity checks, can be reused by other NUMA
> > implementations.
> >
> > So in this patch, we move some variables and related functions
> > for NUMA memory and processors to common code. At the same time,
> > numa_set_processor_nodes_parsed has been introduced for
> > ACPI-specific code to update processor parsing results. With this
> > helper, we can move most of the NUMA memory affinity init code out
> > of ACPI. The bad_srat and node_to_pxm functions have been exported
> > for common code to do the architectural fallback and the
> > node-to-proximity-domain conversion.
>
> I consider it wrong for generic (ACPI-independent) code to use
> terms like "srat" or "pxm". This wants abstracting in some way,
> albeit I have to admit I lack a good idea for a suggestion right
> now.

Maybe we can use fw_rsc_table or rsc_table to replace srat, because
SRAT is one kind of NUMA resource description table in ACPI?

For PXM, I had tried to keep PXM in the x86 ACPI implementation. But
the cost is that we would have to move some common code into
architectural code, because some messages use the pxm for information,
and it has a different meaning on each platform, so we cannot simply
remove those messages.
> > For those NUMA implementations that don't support a proximity
> > domain to node map, a simple 1:1 mapping helper can help us
> > avoid modifying some very valuable print messages.
>
> I don't think a simple 1:1 mapping can do. Surely firmware
> represents nodes by some identifiers everywhere. And I don't
> consider it very likely that we would want to blindly re-use
> such numbering inside Xen.

If we keep pxm x86-only for now, we can avoid this 1:1 mapping
between node and PXM.

> > --- a/xen/common/numa.c
> > +++ b/xen/common/numa.c
> > @@ -14,6 +14,21 @@
> >  #include <xen/softirq.h>
> >  #include <asm/acpi.h>
> >
> > +static nodemask_t __initdata processor_nodes_parsed;
> > +static nodemask_t __initdata memory_nodes_parsed;
> > +static struct node __initdata nodes[MAX_NUMNODES];
> > +
> > +static int num_node_memblks;
> > +static struct node node_memblk_range[NR_NODE_MEMBLKS];
> > +static nodeid_t memblk_nodeid[NR_NODE_MEMBLKS];
>
> Some (all) of these likely want to become __read_mostly again, at
> this occasion.

Ok.

> > @@ -32,6 +47,298 @@ nodemask_t __read_mostly node_online_map = { { [0] = 1UL } };
> >
> >  enum numa_mode numa_status;
> >
> > +void __init numa_set_processor_nodes_parsed(nodeid_t node)
> > +{
> > +    node_set(node, processor_nodes_parsed);
> > +}
> > +
> > +int valid_numa_range(paddr_t start, paddr_t end, nodeid_t node)
> > +{
> > +    int i;
>
> unsigned int? Then matching e.g. ...

Ok.

> > +static
> > +enum conflicts __init conflicting_memblks(nodeid_t nid, paddr_t start,
> > +                                          paddr_t end, paddr_t nd_start,
> > +                                          paddr_t nd_end,
> > +                                          unsigned int *mblkid)
> > +{
> > +    unsigned int i;
>
> ... this. But perhaps also elsewhere.

I will check the code and fix them.

Cheers,
Wei Chen

> Jan