
Re: [Xen-devel] [PATCH] mm/page_alloc: always scrub pages given to the allocator



Hi,

On 10/01/2018 10:58 AM, Sergey Dyasli wrote:
Having the allocator return unscrubbed pages is a potential security
concern: a domain can be given pages still containing the memory contents
of another domain. This may happen, for example, if a domain voluntarily
releases its own memory (ballooning being the easiest way of doing this).

Based on the comment you dropped below, I would have thought the guest is responsible for scrubbing the pages it gives back via ballooning. Did I miss anything?
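For reference, the release path in question is the guest's balloon driver
handing frames back to Xen with XENMEM_decrease_reservation. A minimal
guest-side sketch, assuming Linux-style hypercall wrappers
(HYPERVISOR_memory_op() etc.); balloon_out_one_page() is a made-up helper
name, not code from this patch:

    /* Hand one frame back to the hypervisor. Nothing on this path
     * guarantees the guest zeroed the page first, which is why the
     * allocator has to treat freed pages as dirty. */
    static int balloon_out_one_page(xen_pfn_t gmfn)
    {
        struct xen_memory_reservation reservation = {
            .nr_extents   = 1,
            .extent_order = 0,
            .domid        = DOMID_SELF,
        };

        set_xen_guest_handle(reservation.extent_start, &gmfn);

        /* Returns the number of extents actually released. */
        return HYPERVISOR_memory_op(XENMEM_decrease_reservation,
                                    &reservation) == 1 ? 0 : -EBUSY;
    }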


Change the allocator to always scrub the pages given to it by:

1. free_xenheap_pages()
2. free_domheap_pages()
3. online_page()
4. init_heap_pages()

Performance testing has shown that on multi-node machines bootscrub
vastly outperforms idle-loop scrubbing. So instead of marking all pages
dirty initially, introduce bootscrub_done to track the completion of
the process and eagerly scrub all allocated pages during boot.

If bootscrub is disabled, then all pages will be marked as dirty right
away and scrubbed either in the idle loop or eagerly during allocation.
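Put differently, the scrub policy for a freed page becomes (a sketch of
how I read the patch, not code from it):

    /*
     * opt_bootscrub  bootscrub_done  what happens to a freed page
     * -------------  --------------  --------------------------------------
     * true           false           init_heap_pages() hands it to the heap
     *                                unmarked; the boot scrubber will clean
     *                                it, and alloc_heap_pages() scrubs it
     *                                eagerly if it is allocated before
     *                                bootscrub completes
     * true           true            marked dirty; scrubbed in the idle
     *                                loop or synchronously on allocation
     * false          n/a             same as above: marked dirty, then
     *                                idle-loop or allocation-time scrub
     */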

After this patch, alloc_heap_pages() is guaranteed to return scrubbed
pages to the caller unless the MEMF_no_scrub flag was provided.
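In caller terms the contract is roughly the following sketch (not patch
code; alloc_domheap_pages() shown, the xenheap variant behaves the same):

    struct page_info *pg;

    /* Default: page contents are guaranteed to be scrubbed. */
    pg = alloc_domheap_pages(d, order, 0);

    /* Opt-out: contents are undefined; only safe if the caller fully
     * overwrites the pages before exposing them to a guest. */
    pg = alloc_domheap_pages(d, order, MEMF_no_scrub);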

Signed-off-by: Sergey Dyasli <sergey.dyasli@xxxxxxxxxx>
---
CC: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
CC: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
CC: George Dunlap <George.Dunlap@xxxxxxxxxxxxx>
CC: Jan Beulich <jbeulich@xxxxxxxx>
CC: Julien Grall <julien.grall@xxxxxxx>
CC: Tim Deegan <tim@xxxxxxx>
---
  docs/misc/xen-command-line.markdown |  3 ++-
  xen/common/page_alloc.c             | 29 ++++++++++++++---------------
  2 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/docs/misc/xen-command-line.markdown b/docs/misc/xen-command-line.markdown
index 1ffd586224..d9bebf4e4b 100644
--- a/docs/misc/xen-command-line.markdown
+++ b/docs/misc/xen-command-line.markdown
@@ -233,7 +233,8 @@ Xen's command line.
Scrub free RAM during boot. This is a safety feature to prevent
  accidentally leaking sensitive VM data into other VMs if Xen crashes
-and reboots.
+and reboots. Note: even if disabled, RAM will still be scrubbed in
+the background.
Some users may want to skip the boot scrub, either because they don't need it or for testing, as it is too slow on some platforms (e.g. models).

I think we still need to give an option to those users. If they disable the boot scrub, then they know the security risk, based on the description of the option.
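For reference, the knob stays a standard boolean_param(), so those users
can keep disabling it on the Xen command line, e.g.:

    bootscrub=no        (equivalently bootscrub=0, bootscrub=false,
                         or no-bootscrub)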

### bootscrub\_chunk
  > `= <size>`
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 16e1b0c357..cb1c265f9c 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -161,8 +161,9 @@ string_param("badpage", opt_badpage);
  /*
   * no-bootscrub -> Free pages are not zeroed during boot.
   */
-static bool_t opt_bootscrub __initdata = 1;
+static bool __read_mostly opt_bootscrub = true;
  boolean_param("bootscrub", opt_bootscrub);
+static bool __read_mostly bootscrub_done;

 /*
   * bootscrub_chunk -> Amount of bytes to scrub lockstep on non-SMT CPUs
@@ -1026,6 +1027,12 @@ static struct page_info *alloc_heap_pages(
          }
      }

+    if ( unlikely(opt_bootscrub && !bootscrub_done) )
+    {
+        for ( i = 0; i < (1U << order); i++ )
+            scrub_one_page(&pg[i]);
+    }
+
      if ( need_tlbflush )
          filtered_flush_tlb_mask(tlbflush_timestamp);
@@ -1684,7 +1691,7 @@ unsigned int online_page(unsigned long mfn, uint32_t *status)
      spin_unlock(&heap_lock);

    if ( (y & PGC_state) == PGC_state_offlined )
-        free_heap_pages(pg, 0, false);
+        free_heap_pages(pg, 0, true);
return ret;
  }
@@ -1763,7 +1770,8 @@ static void init_heap_pages(
              nr_pages -= n;
          }

-        free_heap_pages(pg + i, 0, scrub_debug);
+        free_heap_pages(pg + i, 0, scrub_debug ||
+                                   (opt_bootscrub ? bootscrub_done : true));
      }
  }
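Nit: the ternary above takes a moment to parse; it is equivalent to the
plain disjunction below, which may read more easily:

    /* Scrub now unless the boot scrubber will take care of the page. */
    bool need_scrub = scrub_debug || !opt_bootscrub || bootscrub_done;

    free_heap_pages(pg + i, 0, need_scrub);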
@@ -2024,6 +2032,7 @@ static void __init scrub_heap_pages(void)
          }
      }

+    bootscrub_done = true;
      printk("done.\n");
#ifdef CONFIG_SCRUB_DEBUG
@@ -2098,7 +2107,7 @@ void free_xenheap_pages(void *v, unsigned int order)
    memguard_guard_range(v, 1 << (order + PAGE_SHIFT));

-    free_heap_pages(virt_to_page(v), order, false);
+    free_heap_pages(virt_to_page(v), order, true);
  }
#else
@@ -2293,8 +2302,6 @@ void free_domheap_pages(struct page_info *pg, unsigned int order)
      }
      else
      {
-        bool_t scrub;
-
          if ( likely(d) && likely(d != dom_cow) )
          {
              /* NB. May recursively lock from relinquish_memory(). */
@@ -2309,13 +2316,6 @@ void free_domheap_pages(struct page_info *pg, unsigned int order)
              drop_dom_ref = !domain_adjust_tot_pages(d, -(1 << order));

            spin_unlock_recursive(&d->page_alloc_lock);
-
-            /*
-             * Normally we expect a domain to clear pages before freeing them,
-             * if it cares about the secrecy of their contents. However, after
-             * a domain has died we assume responsibility for erasure.
-             */
-            scrub = d->is_dying || scrub_debug;
          }
          else
          {
@@ -2328,10 +2328,9 @@ void free_domheap_pages(struct page_info *pg, unsigned int order)
               */
              ASSERT(!d || !order);
              drop_dom_ref = 0;
-            scrub = 1;
          }

-        free_heap_pages(pg, order, scrub);
+        free_heap_pages(pg, order, true);
      }

    if ( drop_dom_ref )


Cheers,

--
Julien Grall
