
[Xen-devel] [PATCH v3 0/5] Implement the XEN_DOMCTL_memory_mapping hypercall for ARM



Hello,

this patchset is my third attempt at proposing an implementation of the
XEN_DOMCTL_memory_mapping hypercall for the ARM architecture. I have
optimistically removed the RFC tag here; I hope this is the right thing to do
at this stage.

The first implementation attempt ([1]) originated directly from my thesis
project, which I am still developing with Paolo Valente ([2]). For my master's
thesis, in fact, I am porting an automotive-grade real-time OS (Evidence's
ERIKA Enterprise, [3], [4]) to Xen on ARM; the OS, which is currently being
ported as a domU, needs to access peripherals performing memory-mapped I/O,
and therefore needs the related I/O-memory ranges to be made accessible to it.
When I worked on the first implementation of the DOMCTL, after reading a
related xen-devel discussion ([5]), I did not know that Eric Trudeau had
already proposed a similar implementation of the hypercall ([6], [7]); I
afterwards had the benefit of both the feedback given to him and his own
feedback.

The v2 patchset ([8]) tried to address some issues pointed out mainly by
Julien Grall and Eric Trudeau, and further discussed with Dario Faggioli.
Those issues concerned the correct typing of variables and the missing
page-alignment of addresses; v1 was also implemented under the wrong
assumption that dom0 could access any range of memory addresses as I/O
memory, while v2 instead added to dom0's iomem_caps the I/O-memory ranges
related to devices exposed to it; finally, the v1 patchset added the new
DOMCTL as ARM32-specific, although it is not.

The first commit in this v3 patchset implements the bookkeeping needed to
keep a domain's iomem_caps coherent with the memory-mapping operations
performed during the domain's creation. With respect to v2, it correctly
prints the domain's ID instead of assuming that the domain is privileged,
as pointed out by Julien Grall; it also features a more accurate commit
description than the v2 one, following feedback from Ian Campbell.
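
For reference, the idea can be condensed roughly as follows. This is a
hedged sketch, not the literal domain_build.c hunk: permit_device_iomem()
is a hypothetical helper name of mine, while iomem_permit_access() and
paddr_to_pfn() are existing Xen primitives.

    /* Sketch: when a device's MMIO region is mapped into the domain under
     * construction, also record that region in the domain's iomem_caps
     * rangeset, so that later permission checks stay coherent with what
     * has actually been mapped. */
    static int permit_device_iomem(struct domain *d, paddr_t addr,
                                   paddr_t size)
    {
        int res = iomem_permit_access(d, paddr_to_pfn(addr),
                                      paddr_to_pfn(PAGE_ALIGN(addr + size - 1)));

        if ( res )
            /* Print the actual domain ID instead of assuming "dom0". */
            printk(XENLOG_ERR "Unable to permit dom%d access to"
                   " 0x%"PRIpaddr" - 0x%"PRIpaddr"\n",
                   d->domain_id, addr, addr + size - 1);

        return res;
    }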

The second commit adds to the apply_p2m_changes() function some consistency
checks useful for the REMOVE operation, since the new DOMCTL may be invoked
with wrong parameters. More specifically, the checks added with this commit
cover the following two cases, as suggested by Julien Grall (a minimal model
of the checks is sketched after the list):
. an unmap operation which is meant to be performed on I/O memory must involve
  a gfn mapping which is typed p2m_mmio_direct;
. the gfn given as a parameter must be mapped to the mfn given as a parameter.
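
The following minimal, self-contained C model illustrates the two checks;
the types and names here are simplified stand-ins of my own, not Xen's
actual p2m structures.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef enum { p2m_ram_rw, p2m_mmio_direct } p2m_type_t;

    /* Simplified stand-in for the existing p2m entry of a gfn. */
    struct p2m_entry {
        bool valid;        /* is the gfn currently mapped? */
        uint64_t mfn;      /* machine frame the gfn maps to */
        p2m_type_t type;   /* how the mapping was established */
    };

    /* Return true iff removing (gfn -> mfn) as I/O memory is consistent. */
    static bool remove_checks_pass(const struct p2m_entry *e, uint64_t mfn)
    {
        if ( !e->valid )
            return false;
        /* Check 1: only mappings typed p2m_mmio_direct may be removed. */
        if ( e->type != p2m_mmio_direct )
            return false;
        /* Check 2: the gfn must actually be mapped to the given mfn. */
        if ( e->mfn != mfn )
            return false;
        return true;
    }

    int main(void)
    {
        struct p2m_entry e = { true, 0x90000, p2m_mmio_direct };

        printf("%d\n", remove_checks_pass(&e, 0x90000)); /* prints 1 */
        printf("%d\n", remove_checks_pass(&e, 0x90001)); /* prints 0 */
        return 0;
    }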

The third commit moves the x86 implementation of the XEN_DOMCTL_memory_mapping
hypercall to xen/common/domctl.c and adapts the code so that it can be used by
both the x86 and ARM architectures.
With respect to the corresponding v2 commit, this one tries to address the
following issues (a condensed sketch of the resulting handler is given after
the list).
. The code has been made common to the ARM and x86 architectures by
  abstracting out the different operations used to map and unmap I/O memory.
  To this purpose, a new pair of functions (map_mmio_regions() and
  unmap_mmio_regions()) has been added to the x86 p2m handling library; a new
  paddr_bits variable has also been added for ARM.
  The v2 implementation added the new code to xen/arch/arm/domctl.c, although
  that code was almost a complete copy of the already-existing x86 one, as
  pointed out by Ian Campbell and Jan Beulich.
. The unmap_mmio_regions() function now takes pfns as parameters, as requested
  by Julien Grall.
. The end mfn and gfn are now computed only once, in variables local to the
  DOMCTL's memory_mapping block, as suggested by Ian Campbell.
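
For orientation, the resulting common handler has roughly the following
shape. This is a condensed, hedged sketch of a fragment of the DOMCTL
dispatcher: XSM checks, locking and some error paths are omitted, and
details may differ from the actual patch.

    case XEN_DOMCTL_memory_mapping:
    {
        unsigned long gfn = op->u.memory_mapping.first_gfn;
        unsigned long mfn = op->u.memory_mapping.first_mfn;
        unsigned long nr_mfns = op->u.memory_mapping.nr_mfns;
        /* End frames are computed only once, in block-local variables. */
        unsigned long gfn_end = gfn + nr_mfns - 1;
        unsigned long mfn_end = mfn + nr_mfns - 1;

        ret = -EINVAL;
        /* Reject wrapping ranges and frames beyond the addressable width. */
        if ( mfn_end < mfn || gfn_end < gfn ||
             ((mfn | mfn_end) >> (paddr_bits - PAGE_SHIFT)) )
            break;

        ret = -EPERM;
        if ( !iomem_access_permitted(current->domain, mfn, mfn_end) )
            break;

        /* The arch-specific primitives hide the x86/ARM differences. */
        if ( op->u.memory_mapping.add_mapping )
            ret = map_mmio_regions(d, gfn, gfn_end, mfn);
        else
            ret = unmap_mmio_regions(d, gfn, gfn_end, mfn);
        break;
    }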

The fourth commit changes the libxl code that parses the "iomem"
configuration option, to comply with a suggestion given by Ian Campbell,
approved by Dario Faggioli and further discussed with Julien Grall and
Stefano Stabellini.
More specifically, it extends the libxl code to parse an additional, optional
parameter to the "iomem" domU configuration option. This allows the user to
specify the start guest address to map the machine I/O-memory range to; in
the absence of an explicit specification, the code defaults to the behaviour
implemented by this patchset's v1, that is, a 1:1 mapping.
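
As an illustration of the intended syntax, an entry such as "fe800,4@c0000"
asks for four pages of machine memory starting at page 0xfe800 to be mapped
starting at guest page 0xc0000, while "fe800,4" keeps the 1:1 default. A
minimal, standalone sketch of such parsing (variable names are mine, not
libxl's) could look like this:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        const char *buf = "fe800,4@c0000";  /* example "iomem" entry */
        uint64_t start = 0, nr_pages = 0, gfn = 0;
        int matched;

        /* Hex page numbers: start, number of pages, optional target gfn. */
        matched = sscanf(buf, "%"SCNx64",%"SCNx64"@%"SCNx64,
                         &start, &nr_pages, &gfn);
        if ( matched < 2 )
            return 1;          /* malformed entry */
        if ( matched == 2 )
            gfn = start;       /* no "@GFN" given: default to 1:1 mapping */

        printf("map %"PRIu64" page(s) from 0x%"PRIx64" to 0x%"PRIx64"\n",
               nr_pages, start, gfn);
        return 0;
    }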

The fifth commit adds to libxl the code that handles the iomem configuration
option by invoking the memory_mapping DOMCTL. The code has been ifdef'd to be
available only for ARM and x86 HVM guests, as indicated by Ian Campbell, with
a whitelist approach, as suggested by Jan Beulich.
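
A hedged sketch of the invocation path follows; the struct and the helper
name are illustrative stand-ins of mine, while xc_domain_memory_mapping()
is the existing libxc wrapper around the DOMCTL.

    #include <stdint.h>
    #include <xenctrl.h>

    /* Illustrative stand-in for one parsed "iomem" entry. */
    struct iomem_range {
        uint64_t start;   /* first machine page number */
        uint64_t number;  /* number of pages */
        uint64_t gfn;     /* first guest page number to map to */
    };

    static int map_iomem_ranges(xc_interface *xch, uint32_t domid,
                                const struct iomem_range *ranges, int nr)
    {
        int i, ret;

        for ( i = 0; i < nr; i++ )
        {
            /* The last argument selects map (1) versus unmap (0). */
            ret = xc_domain_memory_mapping(xch, domid, ranges[i].gfn,
                                           ranges[i].start,
                                           ranges[i].number,
                                           1 /* add_mapping */);
            if ( ret < 0 )
                return ret;
        }

        return 0;
    }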

The code has been tested with Linux v3.13 as dom0 and ERIKA as domU.
Again, any feedback about this version of the patches is more than welcome.

Arianna

Arianna Avanzini (5):
  arch, arm: domain build: allow access to I/O memory of mapped devices
  arch, arm: add consistency checks to REMOVE p2m changes
  xen, common: add the XEN_DOMCTL_memory_mapping hypercall
  tools, libxl: parse optional start gfn from the iomem config option
  tools, libxl: handle the iomem parameter with the memory_mapping hcall

 docs/man/xl.cfg.pod.5           |  7 +++--
 tools/libxl/libxl_create.c      | 17 ++++++++++
 tools/libxl/libxl_types.idl     |  1 +
 tools/libxl/xl_cmdimpl.c        | 22 ++++++++-----
 xen/arch/arm/cpu.c              |  2 ++
 xen/arch/arm/domain_build.c     | 11 +++++++
 xen/arch/arm/p2m.c              | 45 ++++++++++++++++++++++++--
 xen/arch/x86/domctl.c           | 70 -----------------------------------------
 xen/arch/x86/mm/p2m.c           | 39 +++++++++++++++++++++++
 xen/common/domctl.c             | 69 ++++++++++++++++++++++++++++++++++++++++
 xen/include/asm-arm/p2m.h       |  4 +++
 xen/include/asm-arm/processor.h |  2 ++
 xen/include/asm-x86/p2m.h       | 10 ++++++
 13 files changed, 217 insertions(+), 82 deletions(-)

-- 
1.9.0

