
Re: [Xen-users] xen + ovmf + pci passthrough possible?



Thank you, John, for your answer! I somehow managed to miss it. I'll take
a look at it this weekend.

Best regards
On 13.04.2018 03:10, John Thomson wrote:
> On Sat, 7 Apr 2018, at 00:16, tseegerkrb wrote:
>> A short question about the EFI BIOS and PCI passthrough: is it possible
>> to pass a PCI device through to a domU with the OVMF BIOS?
> Hi, I have been looking into this recently as well.
>
> I did get it to work in a very simple setup.
> The motherboard's USB ports are passed through and working in an OVMF Linux guest.
>
> Environment:
> - i7-2600
> - motherboard USB ports PCI device passthrough
> - ArchLinux ISO for guest
> - Xen staging 641f9ce2fa, with more debug messages here and there
>
> I have found:
> - My test PCI device requires relaxed mode:
>   xl dmesg:
>   [VT-D] It's disallowed to assign 0000:00:1d.0 with shared RMRR at bdbb8000 for Dom13.
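>
>   (Side note, not from the original logs: a quick way from dom0 to pull the
>   RMRR-related messages together and identify the device, assuming the usual
>   xl and pciutils tools, is something like:)
>     xl dmesg | grep -i RMRR
>     lspci -s 00:1d.0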
>
> - With domU xl.cfg rdm="strategy=host,policy=relaxed", OVMF requires
>   significantly more guest memory (+2.1 GB) than SeaBIOS:
>   xl dmesg:
>   (d20) [2018-04-05 05:29:03] ovmf_load mem_hole_populate_ram ffec0
>   (d20) [2018-04-05 05:29:03] HVMLoader mem_hole_populate_ram started
>   (d20) [2018-04-05 05:29:03] mfn: ffec0
>   (d20) [2018-04-05 05:29:03] not over_allocated
>   (XEN) [2018-04-05 05:29:03] d20v0 Over-allocation for domain 20: 256257 > 256256
>   (XEN) [2018-04-05 05:29:03] memory.c:242:d20v0 Could not allocate order=0 extent: id=20 memflags=0 (0 of 1)
>   (XEN) [2018-04-05 05:29:03] memory.c:244:d20v0 mfn 0x100000004
>   (d20) [2018-04-05 05:29:03] hvm_info->high_mem_pgend
>   (XEN) [2018-04-05 05:29:03] /build/xen-git/src/xen/xen/include/asm/mm.h:384:d20v0 Could not get page ref for mfn ffffffffffffffff
>   (d20) [2018-04-05 05:29:03] *** HVMLoader bug at util.c:442
>   (d20) [2018-04-05 05:29:03] *** HVMLoader crashed.
>
>   The over-allocation limit hit here (256256 = 0x3e900 pages) is very close to
>   the xl -vvv create xc_dom_mem_init figure of 0x3e000 pages.
>
>   xl -vvv create
>   domainbuilder: detail: loader probe OK
>   xc: detail: ELF: phdr: paddr=0x100000 memsz=0x8cc84
>   xc: detail: ELF: memory: 0x100000 -> 0x18cc84
>   domainbuilder: detail: xc_dom_mem_init: mem 992 MB, pages 0x3e000 pages, 4k each
>   domainbuilder: detail: xc_dom_mem_init: 0x3e000 pages
>   domainbuilder: detail: xc_dom_boot_mem_init: called
>   domainbuilder: detail: range: start=0x0 end=0x3e000000
>   domainbuilder: detail: xc_dom_malloc            : 1984 kB
>   xc: detail: PHYSICAL MEMORY ALLOCATION:
>   xc: detail:   4KB PAGES: 0x0000000000000200
>   xc: detail:   2MB PAGES: 0x00000000000001ef
>   xc: detail:   1GB PAGES: 0x0000000000000000
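>
>   (Rough sanity check of the page maths, not from the original logs; the
>   numbers are taken from the messages above and assume 4 KiB pages, e.g. in
>   bash:)
>     echo $(( 0x3e000 ))                  # 253952 pages populated by xc_dom_mem_init
>     echo $(( 0x3e000 * 4 / 1024 ))       # 992 MiB, matching "mem 992 MB" above
>     echo $(( 0x3e900 ))                  # 256256, the over-allocation cap that was hit
>     echo $(( (0x3e900 - 0x3e000) * 4 ))  # the cap is only 9216 KiB above what was populated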
>   
> - Once I have enough memory allocated, OVMF is very slow to start.
>   OVMF stalls for around 30 seconds before continuing:
>   In this time, VNC displays:
>   Guest has not initialized the display (yet).
>   OVMF debug build with domU xl.cfg:
>   device_model_args=[ "-debugcon", "file:debug.log", "-global", "isa-debugcon.iobase=0x402"]
>   shows:
>   SecCoreStartupWithStack(0xFFFCC000, 0x820000)
>   OVMF can take around 10 minutes, with the VNC display blank, between showing
>   the Tianocore logo and the ISO boot menu.
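>
>   (Not from the original mail: one way to confirm OVMF is still alive during
>   the stall is to tail the debug log. With the relative "file:debug.log"
>   argument above it lands in QEMU's working directory, so it may need
>   locating first:)
>     find / -xdev -name debug.log 2>/dev/null
>     tail -f debug.log    # run from the directory found above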
>
> - ISO load: hdtype="ahci" "/mnt/iso/archlinux-2018.03.01-x86_64.iso, raw, xvda, cdrom"
>   - sometimes the guest boots normally, sometimes mounting root from the
>     initramfs fails, with both OVMF and SeaBIOS
>   - I have not worked out why:
>     - it happens with no PCI devices passed through, when
>       rdm="strategy=host,policy=relaxed" is set
>     - I have not seen it happen without rdm="strategy=host,policy=relaxed"
>   - sometimes cdrom is /dev/sda{1,2} instead of /dev/sr0 in booted guest.
>   - when the root mount fails, /dev/disk/by-label/ARCH_201803 points to ../../sda2:
>     vbd vbd-51712: failed to write error node for device/vbd/51712 (19) xenbus_dev_probe on device/vbd/51712
>     Falling back to interactive prompt.
>     Manually continue boot with:
>     mount /dev/sda1 /run/archiso/bootmnt; exit
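>
>     (Not from the original mail: if sda1 turns out not to be the right
>     partition, lsblk shows which one actually carries the ISO filesystem
>     before mounting, e.g.:)
>     lsblk -o NAME,SIZE,LABEL,FSTYPE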
>
>   - SeaBIOS sometimes has /dev/sr0
>     ls -l /dev/disk/by-label/ARCH_201803 
>     lrwxrwxrwx 1 root root 9 Apr 13 00:00 /dev/disk/by-label/ARCH_201803 -> ../../sr0
>
>     lsblk
>     NAME  MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
>     loop0   7:0    0 442.1M  1 loop /run/archiso/sfs/airootfs
>     sr0    11:0    1   551M  0 rom  /run/archiso/bootmnt
>
>   - SeaBIOS sometimes has /dev/sda{1,2}
>     lsblk
>     NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
>     loop0    7:0    0 442.1M  1 loop /run/archiso/sfs/airootfs
>     sda      8:0    0   551M  0 disk 
>     ├─sda1   8:1    0   551M  0 part /run/archiso/bootmnt
>     └─sda2   8:2    0   256M  0 part 
>
>   - OVMF with memory=3100:
>     sda      8:0    0   551M  0 disk 
>     ├─sda1   8:1    0   551M  0 part /run/archiso/bootmnt
>     └─sda2   8:2    0   256M  0 part 
>
>
> Guest config:
>
> name="test_arch.cfg"
> type="hvm"
> memory=3100
> bios="ovmf"
> device_model_args=[ "-debugcon", "file:debug.log", "-global", "isa-debugcon.iobase=0x402"]
> hdtype="ahci"
> disk=[
> "/mnt/iso/archlinux-2018.03.01-x86_64.iso, raw, xvda, cdrom"
> ]
> vif=[ "mac=00:16:3e:5f:1a:a2" ]
> vnclisten="0.0.0.0"
> vncunused=1
> vncpasswd="pass"
> rdm="strategy=host,policy=relaxed"
> pci=[
>     "00:1d.0,seize=1"
> ]
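>
> (Not part of the original config: once the guest is up, the passthrough can be
> confirmed from dom0 with the domain name used above, e.g.)
>     xl pci-list test_arch.cfg
> (and lspci inside the guest should then list the USB controller.)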
>
>
>
> A more complicated PCI passthrough setup with multiple devices failed (OVMF
> commit 6d2d2e6e5b for this test).
> I only tried this once and have not looked any further into it yet.
> The OVMF debug log ended with:
>
> InitRootBridge: populated root bus 0, with room for 0 subordinate bus(es)
> RootBridge: PciRoot(0x0)
>   Support/Attr: 7007F / 7007F
>     DmaAbove4G: No
> NoExtConfSpace: Yes
>      AllocAttr: 0 ()
>            Bus: 0 - 0
>             Io: C000 - C34F
>            Mem: F30A8000 - F30AA4FF
>     MemAbove4G: F3040000 - F30A7FFF
>           PMem: F0000000 - F2FFFFFF
>    PMemAbove4G: E0000000 - EFFFFFFF
> ASSERT /build/ovmf/src/edk2/MdeModulePkg/Bus/Pci/PciHostBridgeDxe/PciRootBridgeIo.c(106): Bridge->MemAbove4G.Base >= 0x0000000100000000ULL
>
> (In the dump above, MemAbove4G starts at F3040000, which is below 4 GiB, so
> presumably that is the condition this ASSERT is rejecting.)
>
>
>


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-users

 

