
Re: [Xen-devel] [RFC PATCH 68/84] page_alloc: actually do the mapping and unmapping on xenheap.



Hi,

On 9/26/19 12:18 PM, hongyax@xxxxxxxxxx wrote:
> On 26/09/2019 11:39, Julien Grall wrote:
>> Hi,
>>
>> NIT Title: Please remove full stop.
>>
>> On 9/26/19 10:46 AM, hongyax@xxxxxxxxxx wrote:
>>> From: Hongyan Xia <hongyax@xxxxxxxxxx>
>>
>> Please provide a description of what/why you are doing this in the
>> commit message.
>>
>> Also, IIRC, x86 always has !CONFIG_SEPARATE_XENHEAP. So can you
>> explain why the path with separate xenheap is also modified?
>>
>>> Signed-off-by: Hongyan Xia <hongyax@xxxxxxxxxx>
>>> ---
>>>   xen/common/page_alloc.c | 18 ++++++++++++++++--
>>>   1 file changed, 16 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
>>> index 7cb1bd368b..4ec6299ba8 100644
>>> --- a/xen/common/page_alloc.c
>>> +++ b/xen/common/page_alloc.c
>>> @@ -2143,6 +2143,7 @@ void init_xenheap_pages(paddr_t ps, paddr_t pe)
>>>   void *alloc_xenheap_pages(unsigned int order, unsigned int memflags)
>>>   {
>>>       struct page_info *pg;
>>> +    void *ret;
>>>       ASSERT(!in_irq());
>>> @@ -2151,7 +2152,10 @@ void *alloc_xenheap_pages(unsigned int order, unsigned int memflags)
>>>       if ( unlikely(pg == NULL) )
>>>           return NULL;
>>> -    memguard_unguard_range(page_to_virt(pg), 1 << (order + PAGE_SHIFT));
>>> +    ret = page_to_virt(pg);
>>> +    memguard_unguard_range(ret, 1 << (order + PAGE_SHIFT));
>>> +    map_pages_to_xen((unsigned long)ret, page_to_mfn(pg),
>>> +                     1UL << order, PAGE_HYPERVISOR);
>>
>> As mentioned earlier on, for Arm the xenheap will always be mapped. So
>> unless you have plans to tackle the Arm side as well, we should make
>> sure that the behavior is not changed for Arm.
>
> I can add an #ifdef for x86. Although I think if the Arm code is
> correct, this should still not break things. It only breaks if a
> xenheap access is made before allocation or after free, which I think
> is a bug.

Correctness is a matter of perspective ;). The xenheap is already mapped
on Arm, and therefore trying to map it again is considered an error. I
think this is a valid behavior, because if you try to map twice, it
likely means you will want to unmap later on.

Furthermore, the xenheap is using superpage (2MB, 1GB) mappings at the
moment. We completely forbid shattering superpages because they are not
trivial to deal with.

In short, if you wanted to unmap part of it, you would need to shatter
the superpage. Shattering a superpage requires following a specific
sequence (called break-before-make) that goes through an invalid
mapping. We need to be careful, as another processor may access the
superpage at the same time.

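To make the ordering concrete, here is a schematic of break-before-make;
the type and helper names below are placeholders to show the sequence
only, not the actual Xen/Arm primitives:

    /* Placeholder names; only the ordering matters here. */
    static void shatter_superpage(pte_t *entry, pte_t new_table_entry,
                                  unsigned long va, unsigned long size)
    {
        /* 1) Break: replace the live superpage entry with an invalid one. */
        write_entry(entry, INVALID_ENTRY);

        /*
         * 2) Ensure every CPU sees the invalid entry and no stale
         *    translation remains in any TLB. A concurrent access to the
         *    region faults in this window, which is what makes the
         *    sequence delicate.
         */
        barrier_and_flush_tlb_range(va, size);

        /* 3) Make: install the table entry pointing to the 4KB mappings. */
        write_entry(entry, new_table_entry);
    }
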
It may be possible to use only 4KB mappings for the xenheap, but that
would need to be investigated first.

Lastly, and not directly related to the discussion here, I think it
would be a good time to start checking the return value of
map_pages_to_xen(). If the call unlikely fails, we would continue and
only get an error later on that may be more difficult to debug. Instead,
I would fail the allocation if the mapping is not done.

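Something along these lines on top of this hunk, as a rough sketch only
(whether free_heap_pages() is the right way to hand the pages back on
this path would need double-checking):

    ret = page_to_virt(pg);
    memguard_unguard_range(ret, 1 << (order + PAGE_SHIFT));

    /* Fail the allocation rather than returning an unmapped region. */
    if ( map_pages_to_xen((unsigned long)ret, page_to_mfn(pg),
                          1UL << order, PAGE_HYPERVISOR) )
    {
        /* Assumption: give the pages back the same way they were taken. */
        free_heap_pages(pg, order, false);
        return NULL;
    }
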
>> It feels to me we want to introduce a new Kconfig that is selected by
>> x86 to tell whether the direct map is mapped. I would then implement,
>> maybe in xen/mm.h, two stubs (one for when the config is selected, the
>> other for when it is not).
>
> I have the same question. Do we want to move forward with no direct map
> on x86, or do we want to have a compile-time config? If the performance
> is decent, I would prefer the former, since such a big compile-time
> switch leaves two branches of code to be maintained in the future.

Unless you have plans to implement the Arm bits, we will need two
branches to maintain.

But what I suggested is for x86 to always select the option that
requires mapping/unmapping the direct map. Arm would keep it disabled.

Cheers,

--
Julien Grall
