[PATCH 04/22] xen/numa: vmap the pages for memnodemap
From: Hongyan Xia <hongyxia@xxxxxxxxxx>
This avoids the assumption that there is a direct map and that boot pages
fall inside the direct map.
Clean up the variables so that mfn actually stores a type-safe mfn.
Signed-off-by: Hongyan Xia <hongyxia@xxxxxxxxxx>
Signed-off-by: Julien Grall <jgrall@xxxxxxxxxx>
----
Changes compared to Hongyan's version:
* The modified function was moved to common code, so rebase the change accordingly
* vmap_boot_pages() was renamed to vmap_contig_pages() (a rough sketch of such a helper follows below)
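
For context (not part of the patch): a rough sketch of what a helper like
vmap_contig_pages() could look like on top of Xen's generic vmap() interface
(void *vmap(const mfn_t *mfn, unsigned int nr) from xen/vmap.h). The real
helper is introduced earlier in this series and may be implemented
differently; the use of xmalloc_array()/mfn_add() here is purely
illustrative.

/*
 * Illustrative sketch only, not the implementation from this series:
 * map 'nr' contiguous machine frames starting at 'mfn' into vmap space
 * and return the virtual address, or NULL on failure.
 */
void *vmap_contig_pages(mfn_t mfn, unsigned int nr)
{
    mfn_t *mfns = xmalloc_array(mfn_t, nr);
    void *va = NULL;
    unsigned int i;

    if ( !mfns )
        return NULL;

    /* Build the frame list expected by vmap(). */
    for ( i = 0; i < nr; i++ )
        mfns[i] = mfn_add(mfn, i);

    va = vmap(mfns, nr);
    xfree(mfns);

    return va;
}

With such a helper, allocate_cachealigned_memnodemap() only needs the mfn
returned by alloc_boot_pages() and the page count, which is exactly what the
hunk below passes.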
---
xen/common/numa.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/xen/common/numa.c b/xen/common/numa.c
index 4948b21fbe66..2040b3d974e5 100644
--- a/xen/common/numa.c
+++ b/xen/common/numa.c
@@ -424,13 +424,13 @@ static int __init populate_memnodemap(const struct node *nodes,
 static int __init allocate_cachealigned_memnodemap(void)
 {
     unsigned long size = PFN_UP(memnodemapsize * sizeof(*memnodemap));
-    unsigned long mfn = mfn_x(alloc_boot_pages(size, 1));
+    mfn_t mfn = alloc_boot_pages(size, 1);
 
-    memnodemap = mfn_to_virt(mfn);
-    mfn <<= PAGE_SHIFT;
+    memnodemap = vmap_contig_pages(mfn, size);
+    BUG_ON(!memnodemap);
     size <<= PAGE_SHIFT;
     printk(KERN_DEBUG "NUMA: Allocated memnodemap from %lx - %lx\n",
-           mfn, mfn + size);
+           mfn_to_maddr(mfn), mfn_to_maddr(mfn) + size);
     memnodemapsize = size / sizeof(*memnodemap);
 
     return 0;
--
2.38.1