
Re: [Xen-devel] [Bug] Intel RMRR support with upstream Qemu



> On 24/07/17 09:07, Zhang, Xiong Y wrote:
> >>> On Fri, 21 Jul 2017 10:57:55 +0000
> >>> "Zhang, Xiong Y" <xiong.y.zhang@xxxxxxxxx> wrote:
> >>>
> >>>> On an Intel Skylake machine with upstream qemu, if I add
> >>>> "rdm=strategy=host,policy=strict" to hvm.cfg, a Windows 8.1 DomU
> >>>> cannot boot up and keeps rebooting.
> >>>>
> >>>> Steps to reproduce this issue:
> >>>>
> >>>> 1)       Boot xen with iommu=1 to enable iommu
> >>>> 2)       hvm.cfg contain:
> >>>>
> >>>> builder="hvm"
> >>>>
> >>>> memory=xxxx
> >>>>
> >>>> disk=['win8.1 img']
> >>>>
> >>>> device_model_override='qemu-system-i386'
> >>>>
> >>>> device_model_version='qemu-xen'
> >>>>
> >>>> rdm="strategy=host,policy=strict"
> >>>>
> >>>> 3)       xl cr hvm.cfg
> >>>>
> >>>> Conditions to reproduce this issue:
> >>>>
> >>>> 1)       DomU memory size > the top address of the RMRR; otherwise this
> >>>> issue disappears.
> >>>> 2)       rdm="strategy=host,policy=strict" is present.
> >>>> 3)       A Windows DomU.  A Linux DomU doesn't have this issue.
> >>>> 4)       Upstream qemu.  Traditional qemu doesn't have this issue.
> >>>>
> >>>> In this situation, hvmloader will relocate some guest RAM below the
> >>>> RMRR to high memory, and it seems the Windows guest then accesses an
> >>>> invalid address. Could someone give me some suggestions on how to
> >>>> debug this?
> >>>
> >>> You likely have RMRR range(s) below the 2GB boundary.
> >>>
> >>> You may try the following:
> >>>
> >>> 1. Specify some large 'mmio_hole' value in your domain configuration
> >>> file, e.g. mmio_hole=2560
> >>> 2. If that doesn't help, the 'xl dmesg' output might be useful
> >>>
> >>> Right now upstream QEMU still doesn't support relocating parts of
> >>> guest RAM above the 4GB boundary if they are overlapped by MMIO ranges.
> >>> AFAIR forcing allow_memory_relocate to 1 for hvmloader didn't bring
> >>> anything good for HVM guests.
> >>>
> >>> Setting the mmio_hole size manually lets you create a "predefined"
> >>> memory/MMIO hole layout for both QEMU (via 'max-ram-below-4g') and
> >>> hvmloader (via a XenStore param), effectively avoiding MMIO/RMRR
> >>> overlaps and RAM relocation in hvmloader, so this might help.
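
As an illustration of the suggestion above (the number is only an example, not
a recommendation), a single domain config line such as

    mmio_hole=2560

asks for a 2560 MiB low MMIO hole, i.e. guest RAM below 4G capped at
4096 - 2560 = 1536 MiB. libxl then hands that same boundary to QEMU via
'max-ram-below-4g' and hvmloader picks the matching hole size up from
XenStore, so both sides end up with the same memory/MMIO layout instead of
hvmloader resizing the hole on its own.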
> >>
> >> Wrote too soon: "policy=strict" means that you won't be able to create a
> >> DomU if the RMRR were below 2G... so it should actually be above 2GB.
> >> Anyway, try setting the mmio_hole size.
> > [Zhang, Xiong Y] Thanks for your suggestion.
> > Indeed, if I set mmio_hole >= 4G - RMRR_Base, it fixes my issue.
> > I still have two questions about this, could you help me?
> > 1) If hvmloader does low-memory relocation, hvmloader and qemu will see
> > different guest memory layouts, so qemu RAM may overlap with MMIO. Does
> > Xen have a plan to fix this?
> >
> 
> hvmloader doesn't do memory relocation - this ability is turned off by
> default. The reason for the issue is that libxl initially sets the size
> of the lower MMIO hole (based on the RMRR regions present and their size)
> but doesn't communicate it to QEMU via the 'max-ram-below-4g' argument.
>
> When you set the 'mmio_hole' size parameter, you basically force libxl to
> pass this argument to QEMU.
>
> That means the proper fix would be to make libxl pass this argument
> to QEMU whenever RMRR regions are present.
[Zhang, Xiong Y] thanks for your clarification, I will try this solution.
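
For what it's worth, here is a self-contained sketch of the arithmetic such a
libxl change would need: derive 'max-ram-below-4g' from the chosen hole size
and append it to QEMU's -machine argument. This is only an illustration of
the idea, not the real libxl code; the 0xF0000000 default hole start and the
'xenfv' machine name are assumptions based on this thread and common Xen
defaults.

    /* sketch.c - illustration only, not actual libxl code */
    #include <inttypes.h>
    #include <stdio.h>

    #define DEFAULT_MMIO_HOLE_START 0xF0000000ULL  /* assumed default hole start */

    int main(void)
    {
        /* e.g. mmio_hole=2560 (MiB) from the domain config, or a hole that
         * libxl itself enlarged because of RMRR regions */
        uint64_t mmio_hole_memkb  = 2560ULL * 1024;
        uint64_t max_ram_below_4g = (1ULL << 32) - (mmio_hole_memkb << 10);

        /* Only worth passing if the hole is bigger than the default one. */
        if (max_ram_below_4g < DEFAULT_MMIO_HOLE_START)
            printf("-machine xenfv,max-ram-below-4g=%" PRIu64 "\n",
                   max_ram_below_4g);
        return 0;
    }

With the values above this prints max-ram-below-4g=1610612736, i.e. a
RAM/MMIO boundary at 0x60000000 that libxl, QEMU and hvmloader would then all
agree on.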

What I meant by memory relocation is: both qemu and hvmloader assume the
default pci_mem_start is 0xF0000000, but hvmloader lowers pci_mem_start to 3G
or 2G when mmio_total is large, and hvmloader's change to pci_mem_start is
never communicated to Qemu. I ran into two other issues caused by this in my
IGD passthrough environment:
(1): If guest RAM is 2G, hvmloader's pci_mem_start is 2G; Qemu will allocate
gfns in xen_ram_alloc() above 2G, so qemu's RAM overlaps with MMIO.
(2): If guest RAM >= 4G, hvmloader's pci_mem_start < 0xF0000000; Qemu will
declare all gfns below 4G as guest RAM. When hvmloader sets a device's BAR
base address below 0xF0000000, the memory listener callback in qemu for that
BAR never fires, because the BAR's range is already covered by qemu's guest
RAM.

Although the above two issues can be worked around by setting a big enough
mmio_hole parameter, it would be better for Xen to have a proper fix.
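
To make the pci_mem_start mismatch in (1) and (2) concrete, here is a
minimal, self-contained sketch. It is not the actual hvmloader code (the
constants and the sizing loop are simplified from the behaviour described
above), but it shows how the hole start can silently drop to 2G while Qemu
keeps assuming 0xF0000000.

    /* mismatch.c - minimal sketch, not the actual hvmloader code */
    #include <inttypes.h>
    #include <stdio.h>

    #define PCI_MEM_END        0xFC000000u   /* top of the low MMIO hole */
    #define DEFAULT_HOLE_START 0xF0000000u   /* what Qemu assumes as well */

    static uint32_t size_low_mmio_hole(uint32_t mmio_total)
    {
        uint32_t pci_mem_start = DEFAULT_HOLE_START;

        /* Lower the hole start while the BARs still don't fit; this walks
         * 0xF0000000 -> 0xE0000000 -> 0xC0000000 (3G) -> 0x80000000 (2G).
         * (Simplified: the real hvmloader code also checks whether the new
         * start would collide with guest RAM when relocation is disabled.) */
        while (mmio_total > PCI_MEM_END - pci_mem_start &&
               (uint32_t)(pci_mem_start << 1) != 0)
            pci_mem_start <<= 1;

        return pci_mem_start;
    }

    int main(void)
    {
        /* e.g. an IGD plus other BARs needing ~1.5G of MMIO space */
        uint32_t start = size_low_mmio_hole(0x60000000u);

        /* Qemu is never told about this value, so xen_ram_alloc() may still
         * place guest RAM inside [start, 0xF0000000) - the overlap in (1). */
        printf("hvmloader pci_mem_start = 0x%08" PRIx32 "\n", start);
        return 0;
    }

With ~1.5G of MMIO to place, the sketch ends up with pci_mem_start =
0x80000000, exactly the 2G case from (1).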

For reference, a comment from tools/firmware/hvmloader/pci.c:
        /*
         * At the moment qemu-xen can't deal with relocated memory regions.
         * It's too close to the release to make a proper fix; for now,
         * only allow the MMIO hole to grow large enough to move guest memory
         * if we're running qemu-traditional.  Items that don't fit will be
         * relocated into the 64-bit address space.

thanks
> 
> Igor
> 
> > 2) Just now, I did an experiment: in hvmloader, I set
> > HVM_BELOW_4G_RAM_END to 3G and reserved an area such as 0xF0000000 ~
> > 0xFC000000 for qemu_ram_allocate; in Qemu, I modified xen_ram_alloc()
> > to make sure it only allocates gfns in 0xF0000000 ~ 0xFC000000.
> > In this case qemu_ram won't overlap with MMIO, but this workaround
> > couldn't fix my issue.
> > It seems qemu still has another interface besides xen_ram_alloc() for
> > allocating gfns; do you know which interface that is?
> >
> > thanks
> >
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

