
Re: [Xen-devel] [Bug] Intel RMRR support with upstream Qemu


On Fri, 21 Jul 2017 10:57:55 +0000
"Zhang, Xiong Y" <xiong.y.zhang@xxxxxxxxx> wrote:

> On an Intel Skylake machine with upstream QEMU, if I add
> rdm="strategy=host,policy=strict" to hvm.cfg, a Windows 8.1 DomU cannot
> boot up and continuously reboots.
> Steps to reproduce this issue:
> 1)       Boot xen with iommu=1 to enable iommu
> 2)       hvm.cfg contain:
> builder="hvm"
> memory=xxxx
> disk=['win8.1 img']
> device_model_override='qemu-system-i386'
> device_model_version='qemu-xen'
> rdm="strategy=host,policy=strict"
> 3)       xl cr hvm.cfg
> Conditions to reproduce this issue:
> 1)       DomU memory size > the top address of the RMRR. Otherwise, this
> issue disappears.
> 2)       rdm="strategy=host,policy=strict" must be present
> 3)       Windows DomU.  A Linux DomU doesn't have this issue.
> 4)       Upstream QEMU.  Traditional QEMU doesn't have this issue.
> In this situation, hvmloader will relocate some guest ram below RMRR to
> high memory, and it seems window guest access an invalid address. Could
> someone give me some suggestions on how to debug this ?

You likely have RMRR range(s) below the 2GB boundary.

You may try the following:

1. Specify a large 'mmio_hole' value in your domain configuration file,
e.g. mmio_hole=2560
2. If that doesn't help, the 'xl dmesg' output might be useful
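
A minimal sketch of the suggested workaround in the domain config, based on
the reporter's hvm.cfg (the right mmio_hole value depends on where your
RMRRs sit, which 'xl dmesg' should report during IOMMU setup):

```
# hvm.cfg -- enlarge the guest MMIO hole so it covers any RMRR below 4GB
builder="hvm"
memory=4096
disk=['win8.1 img']
device_model_override='qemu-system-i386'
device_model_version='qemu-xen'
rdm="strategy=host,policy=strict"
mmio_hole=2560    # hole spans 1.5GB..4GB; pick a value covering your RMRRs
```

Then recreate the guest with 'xl cr hvm.cfg' and compare the RMRR ranges
reported in 'xl dmesg' against the resulting hole.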

Right now upstream QEMU still doesn't support relocating parts of guest RAM
above the 4GB boundary when they are overlapped by MMIO ranges.
AFAIR, forcing allow_memory_relocate to 1 for hvmloader didn't bring anything
good for HVM guests.

Setting the mmio_hole size manually lets you create a "predefined"
memory/MMIO hole layout for both QEMU (via 'max-ram-below-4g') and
hvmloader (via a XenStore param), effectively avoiding MMIO/RMRR overlaps
and RAM relocation in hvmloader, so this might help.
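
To illustrate the arithmetic, here is a small Python sketch checking whether
a given mmio_hole value places an RMRR entirely inside the hole (so no guest
RAM overlaps it). The RMRR addresses below are hypothetical; the real ones
come from 'xl dmesg' on your machine:

```python
MIB = 1 << 20
GIB = 1 << 30

def mmio_hole_range(mmio_hole_mib):
    """Return (start, end) of the guest MMIO hole below the 4GB boundary."""
    end = 4 * GIB
    start = end - mmio_hole_mib * MIB
    return start, end

def rmrr_covered(rmrr_start, rmrr_end, mmio_hole_mib):
    """True if the RMRR lies entirely inside the hole, i.e. no RAM overlap."""
    hole_start, hole_end = mmio_hole_range(mmio_hole_mib)
    return hole_start <= rmrr_start and rmrr_end <= hole_end

# Hypothetical RMRR at 0x6d800000..0x6fffffff (just below 2GB):
print(rmrr_covered(0x6d800000, 0x6fffffff, 2560))  # hole base 0x60000000 -> True
print(rmrr_covered(0x6d800000, 0x6fffffff, 1024))  # hole base 0xc0000000 -> False
```

With mmio_hole=2560 the hole starts at 1.5GB, below this example RMRR, so
hvmloader has no reason to relocate the RAM under it.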

Xen-devel mailing list
