
[Xen-devel] [PATCH 0 of 7] Synchronized p2m lookups v2



Changes from v1 posted Feb 2nd 2012
(mostly Acked-by tags and comment clarifications):

- patch 1:
 + Clarified comments in header regarding shadow mode
 + Clarified error checking in hap_update_paging_modes
 + Updated comment about locking discipline between p2m and page sharing

- patch 2:
 + Fixed one instance of gfn_unlock when it should have been p2m_unlock
 + Added Acked-by

- patch 4:
 + Added comments explaining new argument to p2m_mem_access_check
 + Added Acked-by

- patch 5:
 + Added Acked-by

- Added patch 6 to the series, previously posted separately as "x86/mm:
  Refactor possibly deadlocking get_gfn calls"
 + Simplified logic of get_two_gfns (see the ordering sketch after this changelog)
 + Made get_two_gfns struct stack vars rather than dynamically allocated

- Added patch 7 to the series, previously posted separately as "x86/mm: When
  removing/adding a page from/to the physmap, keep in mind it could be shared"
 + Added a missing p2m unlock
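
As an aside for reviewers, the core idea behind the get_two_gfns refactor is
to impose a total order on the two (domain, gfn) pairs before taking their
locks, so that concurrent callers passing the same pair in opposite argument
order cannot ABBA-deadlock. The sketch below is illustrative only: the struct
and helper names are hypothetical, and the real helper in the patch also
performs the get_gfn calls itself.

    /* Illustrative sketch, not the code from patch 6.  Assumes Xen's
     * struct domain (with its domain_id field); the struct and helper
     * below are hypothetical.  The point: always lock the pair in one
     * canonical order. */
    struct two_gfns_sketch {
        struct domain *first_d,  *second_d;
        unsigned long  first_g,   second_g;
    };

    static void sort_two_gfns(struct domain *ad, unsigned long ag,
                              struct domain *bd, unsigned long bg,
                              struct two_gfns_sketch *tg /* stack var */)
    {
        /* Canonical order: lower domain_id first; within one domain,
         * lower gfn first.  Any total order works, provided every
         * caller uses the same one. */
        if ( (ad->domain_id < bd->domain_id) ||
             (ad == bd && ag <= bg) )
        {
            tg->first_d  = ad; tg->first_g  = ag;
            tg->second_d = bd; tg->second_g = bg;
        }
        else
        {
            tg->first_d  = bd; tg->first_g  = bg;
            tg->second_d = ad; tg->second_g = ag;
        }
        /* Callers then get_gfn(first) before get_gfn(second), and
         * put_gfn() in the reverse order. */
    }

Keeping the struct on the stack (the v2 change above) also removes an
allocation-failure error path from these lookups.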

Description from original post follows:

Up until now, p2m lookups (get_gfn*) in the x86 architecture were not
synchronized against concurrent updates. With the addition of sharing and
paging subsystems that can dynamically modify the p2m, the need for
synchronized lookups becomes more pressing.

Without synchronized lookups, we've encountered host crashes triggered by race
conditions, particularly when exercising the sharing subsystem.

In this patch series we make p2m lookups fully synchronized. We still use a
single per-domain p2m lock; profiling work (still to be done) will tell
whether fine-grained locking is needed.
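
For readers less familiar with the interface, the caller-side pattern looks
roughly like the sketch below. get_gfn/put_gfn are the real accessor names
(see xen/include/asm-x86/p2m.h); the surrounding function is made up for
illustration.

    /* Usage sketch: illustrative function, real accessor names.  With
     * this series, get_gfn() takes the per-domain p2m lock for the gfn,
     * so the translation is stable until the matching put_gfn(). */
    static void locked_lookup_example(struct domain *d, unsigned long gfn)
    {
        p2m_type_t t;
        mfn_t mfn = get_gfn(d, gfn, &t);   /* p2m locked from here on */

        if ( mfn_valid(mfn) && p2m_is_ram(t) )
        {
            /* Safe to use mfn: paging out and un/sharing of this gfn
             * are excluded until put_gfn() below. */
        }

        put_gfn(d, gfn);                   /* must pair every get_gfn() */
    }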

We only enforce locking of p2m queries for hap-assisted domains. These are the
domains (in the EPT case) that can exercise the paging and sharing subsystems,
and are thus the ones most in need of synchronized lookups.
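
A minimal sketch of what such gating can look like, assuming a hypothetical
helper; hap_enabled() is the real predicate, but the series' actual mechanism
lives in mm-locks.h and p2m.c:

    /* Hypothetical illustration only.  For shadow domains the query
     * path stays unlocked; for hap-assisted domains it takes the p2m
     * lock. */
    static inline void p2m_query_lock_sketch(struct p2m_domain *p2m)
    {
        if ( hap_enabled(p2m->domain) )
            p2m_lock(p2m);
    }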

While the code *should* work for domains relying on shadow paging, it has not
been tested, hence we do not enable this at the moment.

Finally, the locking discipline in the PoD subsystem has been updated, since
PoD relied on some assumptions about the way the p2m lock is used.
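
For context, mm-locks.h enforces lock ordering by declaration position:
declare_mm_lock() records the declaration's line number as the lock's level,
and acquiring locks against that order is a bug. A lock declared below the
p2m lock may therefore only be taken once the p2m lock level has been passed.
The excerpt below illustrates that pattern with a subordinate PoD lock; it is
a sketch in the style of mm-locks.h, not a quote of the patch.

    /* Sketch in the mm-locks.h style, not the actual patch.  Ordering
     * is enforced by position: pod is declared after p2m, so pod_lock()
     * nests strictly inside the p2m ordering level. */
    declare_mm_lock(p2m)
    #define p2m_lock(p)           mm_lock(p2m, &(p)->lock)
    #define p2m_unlock(p)         mm_unlock(&(p)->lock)

    /* ... */

    declare_mm_lock(pod)
    #define pod_lock(p)           mm_lock(pod, &(p)->pod.lock)
    #define pod_unlock(p)         mm_unlock(&(p)->pod.lock)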

Signed-off-by: Andres Lagar-Cavilla <andres@xxxxxxxxxxxxxxxx>
Signed-off-by: Adin Scannell <adin@xxxxxxxxxxx>

 xen/arch/x86/mm/hap/hap.c        |    8 ++
 xen/arch/x86/mm/mm-locks.h       |   40 ++++++++-----
 xen/arch/x86/mm/p2m.c            |   21 ++++++-
 xen/include/asm-x86/mm.h         |    6 +-
 xen/include/asm-x86/p2m.h        |   47 ++++++++++------
 xen/arch/x86/mm/mem_sharing.c    |    2 -
 xen/arch/x86/mm/mm-locks.h       |    4 +-
 xen/arch/x86/mm/p2m-ept.c        |   33 +----------
 xen/arch/x86/mm/p2m-pod.c        |   14 +++-
 xen/arch/x86/mm/p2m-pt.c         |   45 ++-------------
 xen/arch/x86/mm/p2m.c            |   62 +++++++++++----------
 xen/arch/x86/mm/mm-locks.h       |   10 +++
 xen/arch/x86/mm/p2m-pod.c        |  110 +++++++++++++++++++++++---------------
 xen/arch/x86/mm/p2m-pt.c         |    1 +
 xen/arch/x86/mm/p2m.c            |    8 ++-
 xen/include/asm-x86/p2m.h        |   27 ++------
 xen/arch/x86/hvm/emulate.c       |    2 +-
 xen/arch/x86/hvm/hvm.c           |   25 ++++++--
 xen/arch/x86/mm.c                |   10 +-
 xen/arch/x86/mm/guest_walk.c     |    2 +-
 xen/arch/x86/mm/hap/guest_walk.c |    6 +-
 xen/arch/x86/mm/mem_access.c     |   10 +++
 xen/arch/x86/mm/mem_event.c      |    5 +
 xen/arch/x86/mm/p2m.c            |   52 ++++++++++--------
 xen/common/grant_table.c         |    2 +-
 xen/common/memory.c              |    2 +-
 xen/include/asm-x86/mem_access.h |    1 +
 xen/include/asm-x86/mem_event.h  |    3 +
 xen/include/asm-x86/p2m.h        |   10 ++-
 xen/arch/x86/mm/p2m.c            |    4 +-
 xen/arch/x86/hvm/emulate.c       |   33 ++++------
 xen/arch/x86/mm/mem_sharing.c    |   24 +++----
 xen/include/asm-x86/p2m.h        |   60 +++++++++++++++++++++
 xen/arch/x86/mm/p2m.c            |   26 ++++++++-
 34 files changed, 427 insertions(+), 288 deletions(-)
