
Re: [Xen-devel] [Bug] Intel RMRR support with upstream Qemu



On 24/07/17 09:07, Zhang, Xiong Y wrote:
>>> On Fri, 21 Jul 2017 10:57:55 +0000
>>> "Zhang, Xiong Y" <xiong.y.zhang@xxxxxxxxx> wrote:
>>>
>>>> On an Intel Skylake machine with upstream QEMU, if I add
>>>> rdm="strategy=host,policy=strict" to hvm.cfg, a Windows 8.1 DomU can't
>>>> boot up and keeps rebooting.
>>>>
>>>> Steps to reproduce this issue:
>>>>
>>>> 1) Boot Xen with iommu=1 to enable the IOMMU
>>>> 2) hvm.cfg contains:
>>>>
>>>> builder="hvm"
>>>> memory=xxxx
>>>> disk=['win8.1 img']
>>>> device_model_override='qemu-system-i386'
>>>> device_model_version='qemu-xen'
>>>> rdm="strategy=host,policy=strict"
>>>>
>>>> 3) xl cr hvm.cfg
>>>>
>>>> Conditions to reproduce this issue:
>>>>
>>>> 1) DomU memory size > the top address of the RMRR. Otherwise, the
>>>> issue disappears.
>>>> 2) rdm="strategy=host,policy=strict" must be set.
>>>> 3) Windows DomU. A Linux DomU doesn't have this issue.
>>>> 4) Upstream QEMU. Traditional QEMU doesn't have this issue.
>>>>
>>>> In this situation, hvmloader relocates some guest RAM below the RMRR to
>>>> high memory, and it seems the Windows guest then accesses an invalid
>>>> address. Could someone give me some suggestions on how to debug this?
>>>
>>> You likely have RMRR range(s) below the 2GB boundary.
>>>
>>> You may try the following:
>>>
>>> 1. Specify some large 'mmio_hole' value in your domain configuration file,
>>> e.g. mmio_hole=2560
>>> 2. If that doesn't help, the 'xl dmesg' output might be useful
>>>
>>> Right now upstream QEMU still doesn't support relocating parts of
>>> guest RAM above the 4GB boundary when they are overlapped by MMIO ranges.
>>> AFAIR forcing allow_memory_relocate to 1 for hvmloader didn't bring
>>> anything good for HVM guests.
>>>
>>> Setting the mmio_hole size manually allows you to create a "predefined"
>>> memory/MMIO hole layout for both QEMU (via 'max-ram-below-4g') and
>>> hvmloader (via a XenStore param), effectively avoiding MMIO/RMRR overlaps
>>> or RAM relocation in hvmloader, so this might help.
>>
>> Wrote too soon: "policy=strict" means that you won't be able to create a
>> DomU if the RMRR were below 2G... so it actually should be above 2GB. Anyway,
>> try setting the mmio_hole size.
> [Zhang, Xiong Y] Thanks for your suggestion.
> Indeed, if I set mmio_hole >= 4G - RMRR_Base, it fixes my issue.
> I still have two questions about this, could you help me?
> 1) If hvmloader does low memory relocation, hvmloader and QEMU will see
> different guest memory layouts, so QEMU RAM may overlap with MMIO. Does Xen
> have a plan to fix this?
> 

hvmloader doesn't do memory relocation - this ability is turned off by
default. The reason for the issue is that libxl initially sets the size
of the lower MMIO hole (based on the RMRR regions present and their sizes)
but doesn't communicate it to QEMU via the 'max-ram-below-4g' argument.

When you set the 'mmio_hole' size parameter, you basically force libxl to
pass this argument to QEMU.
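
Concretely (an illustrative sketch only: the value is in MiB, has to be at
least 4G - RMRR_Base for the setup above, and 2560 simply matches the
earlier suggestion), the config from the original report would gain one
extra line:

builder="hvm"
memory=xxxx
disk=['win8.1 img']
device_model_override='qemu-system-i386'
device_model_version='qemu-xen'
rdm="strategy=host,policy=strict"
mmio_hole=2560    # low MMIO hole size in MiB, forwarded to hvmloader and QEMU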

That means the proper fix would be to make libxl pass this argument
to QEMU whenever RMRR regions are present.
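
For illustration only (the exact machine type and the computed hole size
depend on the libxl version and the RMRR layout, so treat this as a
sketch rather than the real libxl-generated command line), the device
model would then be started with a matching machine property, roughly:

qemu-system-i386 ... -machine xenfv,max-ram-below-4g=2560M ...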

Igor

> 2) Just now, I did an experiment: in hvmloader, I set HVM_BELOW_4G_RAM_END to
> 3G and reserved an area for qemu_ram_allocate, e.g. 0xF0000000 ~ 0xFC000000;
> in QEMU, I modified xen_ram_alloc() to make sure it only allocates gfns in
> 0xF0000000 ~ 0xFC000000. In this case QEMU RAM won't overlap with MMIO, but
> this workaround didn't fix my issue.
> It seems QEMU still has another interface for allocating gfns besides
> xen_ram_alloc(). Do you know this interface?
> 
> thanks
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

