
[Xen-devel] [PATCH for 4.5 v8 0/1] Add mmio_hole (was Add mmio_hole_size, Add pci_hole_min_size)



Changes from v7 to v8:
  Ian Campbell
    Use MBYTES in xl.cfg
      Done.
    Use LOG().
      Done.
    Please make this a MemKB at the libxl interface level.
      Done -- s/mmio_hole_size/mmio_hole_memkb/ in tools/libxl/*
        s/mmio_hole_size/mmio_hole/ in docs/man/xl.cfg.pod.5 and
        tools/libxl/xl_cmdimpl.c (see the conversion sketch after this
        changelog block).
    Please drop Konrad's comment.
      Done.

  George Dunlap
    Acked-by: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
      Did not add due to changes to many parts.
      No change to tools/firmware/* so I think it still
        applies there.

  Konrad Rzeszutek Wilk
    I am OK with this patch going in Xen 4.5 (as a release-manager).
      I think this still applies.
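
Since the v8 change above moves the libxl interface to MemKB while xl.cfg
stays in MBYTES, here is a minimal sketch of the MB-to-KB conversion xl
has to do on the way in.  The helper name and structure are hypothetical;
only the field name mmio_hole_memkb and the units come from this series.

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical helper: xl.cfg gives mmio_hole in MBYTES, libxl stores
     * it as MemKB (mmio_hole_memkb), so xl converts MB -> KB when filling
     * in the build info. */
    static int set_mmio_hole_memkb(uint64_t mmio_hole_mb,
                                   uint64_t *mmio_hole_memkb)
    {
        if (mmio_hole_mb > UINT64_MAX / 1024)
            return -1;                 /* would overflow the MemKB value */
        *mmio_hole_memkb = mmio_hole_mb * 1024;
        return 0;
    }

    int main(void)
    {
        uint64_t memkb;

        /* e.g. mmio_hole=3072 in xl.cfg -> a 3GB hole below 4G */
        if (set_mmio_hole_memkb(3072, &memkb) == 0)
            printf("mmio_hole_memkb = %" PRIu64 "\n", memkb);
        return 0;
    }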


Changes from v6 to v7:
  Konrad Rzeszutek Wilk:
    QEMU 2.1.0 reference confusing.
      Dropped.
    Why have assert(i < 9) & assert(i < 11)?
      Dropped.
    This check for 256MB is surely obfuscated by the macros.
      Added a comment to help (see the sketch after this changelog block).
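
To make the 256MB check easier to follow without the macros: the default
below-4G MMIO hole runs from HVM_BELOW_4G_MMIO_START up to 4GB, which is
256MB, and a configured hole has to be at least that large.  Below is a
sketch of the equivalent check; the constant mirrors Xen's public headers
and the exact form of the comparison is an assumption, not a copy of the
patch.

    #include <stdint.h>

    /* Value mirrors Xen's public headers; treat it as an assumption here. */
    #define HVM_BELOW_4G_MMIO_START  0xf0000000ULL
    #define GB_4                     (1ULL << 32)

    /* The default MMIO hole is everything from HVM_BELOW_4G_MMIO_START up
     * to 4GB, i.e. 256MB.  A smaller configured hole cannot be honoured:
     * equivalently, the resulting max_ram_below_4g must not exceed
     * HVM_BELOW_4G_MMIO_START. */
    int mmio_hole_size_usable(uint64_t mmio_hole_size)
    {
        if (mmio_hole_size > GB_4)
            return 0;              /* the hole cannot exceed the 4GB mark */

        /* Same as requiring mmio_hole_size >= 256MB. */
        return GB_4 - mmio_hole_size <= HVM_BELOW_4G_MMIO_START;
    }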


Changes from v5 to v6:
  rebased.

  George Dunlap:
    I think we should get rid of xc_hvm_build_with_hole() entirely.
      Done.

  Boris Ostrovsky:
    Add text describing restrictions on hole size
      Done.
    "<=", to be consistent with the check in
        libxl__domain_create_info_setdefault ()
      Done

Changes from v4 to v5:
  Add LIBXL_HAVE_BUILDINFO_HVM_MMIO_HOLE_SIZE.
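
Per the usual libxl convention, that define lets callers test at compile
time whether the new build-info field exists.  A minimal sketch of such a
guard, assuming the v8 field name mmio_hole_memkb; the surrounding
function and the 3GB value are only illustrative.

    #include <libxl.h>

    /* Hypothetical caller: only request a larger MMIO hole when building
     * against a libxl that has the new field. */
    void request_mmio_hole(libxl_domain_build_info *b_info)
    {
    #ifdef LIBXL_HAVE_BUILDINFO_HVM_MMIO_HOLE_SIZE
        b_info->u.hvm.mmio_hole_memkb = 3072 * 1024;  /* MemKB, i.e. 3GB */
    #else
        (void)b_info;   /* older libxl: no per-domain MMIO hole control */
    #endif
    }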

Changes from v3 to v4:

  Jan Beulich:
    s/pci_hole_min_size/mmio_hole_size/
    s/HVM_BELOW_4G_RAM_END/HVM_BELOW_4G_MMIO_START/
    Add ignore to hvmloader message
    Adjust commit message    
    Dropped min; stopped mmio_hole_size changing in hvmloader

Open question: Should I switch this to max_ram_below_4g or stay with
pci_hole_min_size?

This may help or fix http://bugs.xenproject.org/xen/bug/28

Changes from v2 to v3:
  Dropped RFC since QEMU change will be in QEMU 2.1.0 release.

  Dropped "if (dom_info.debug)" in parse_config_data()
  since dom_info no long passed in.

  Lots more bounds checking.  Since QEMU uses min(qemu_limit,
  max_ram_below_4g), the code was adjusted to do the same (see the
  sketch after this changelog block).

  Boris Ostrovsky:
    More in commit message.
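
On the min() point above: the requested hole determines
max_ram_below_4g = 4GB - hole, and QEMU then uses min(qemu_limit,
max_ram_below_4g), so mirroring that keeps libxl's view of the layout in
sync with what QEMU will actually do.  A small illustrative sketch (the
names and the split into a helper are assumptions, not the patch's code):

    #include <stdint.h>

    #define GB_4       (1ULL << 32)
    #define MIN(a, b)  ((a) < (b) ? (a) : (b))

    /* Hypothetical illustration: compute the effective RAM limit below
     * 4GB the same way QEMU does, as min(qemu_limit, max_ram_below_4g). */
    uint64_t effective_max_ram_below_4g(uint64_t qemu_limit,
                                        uint64_t mmio_hole_size)
    {
        uint64_t max_ram_below_4g = GB_4 - mmio_hole_size;

        return MIN(qemu_limit, max_ram_below_4g);
    }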


Changes from v1 to v2:
  Boris Ostrovsky:
    Need to init pci_hole_min_size

  Changed the qemu name from pci_hole_min_size to
  pc-memory-layout.max-ram-below-4g.

  Did not change the Xen version for now.

  Added quick note to xl.cfg.pod.5

  Added a check for a too big value. (Most likely in the wrong
  place.)

If you add enough PCI devices then all MMIO may not fit below 4G,
which may not be the layout the user wanted. This allows you to
increase the address space below 4G that PCI devices can use, and
therefore in more cases avoid having any MMIO above 4G.

There are real PCI cards that do not support MMIO above 4G, so if you
want to emulate them precisely, you may also need to increase the
space below 4G for them. There are also drivers for these cards that
do not work if their MMIO space is mapped above 4G.

This was originally posted as an RFC because you need the upstream
version of QEMU with all 4 patches:

http://marc.info/?l=qemu-devel&m=139455360016654&w=2

This allows growing the pci_hole to the size needed.

This may help with using PCI passthrough and HVM.


Don Slutz (1):
  Add mmio_hole

 docs/man/xl.cfg.pod.5             | 15 +++++++++
 tools/firmware/hvmloader/config.h |  1 -
 tools/firmware/hvmloader/pci.c    | 71 +++++++++++++++++++++++++++------------
 tools/libxl/libxl.h               |  5 +++
 tools/libxl/libxl_create.c        | 29 ++++++++++++----
 tools/libxl/libxl_dm.c            | 20 +++++++++--
 tools/libxl/libxl_dom.c           |  8 +++++
 tools/libxl/libxl_types.idl       |  1 +
 tools/libxl/xl_cmdimpl.c          | 13 +++++++
 9 files changed, 132 insertions(+), 31 deletions(-)

-- 
1.8.4

