
[Xen-users] Problem with PCI Pass-through address space collision


  • To: <xen-users@xxxxxxxxxxxxx>
  • From: Jon Skilling <jon_skilling@xxxxxxxxxxx>
  • Date: Mon, 20 May 2013 18:04:19 +0100
  • Delivery-date: Tue, 21 May 2013 02:02:58 +0000
  • List-id: Xen user discussion <xen-users.lists.xen.org>
  • Thread-index: Ac5Ve/PwKVw2wdDKT9emioqKOn4gAg==

Hi,

 

I've been trying to configure Xen on my HP ML350 G4 server for the past two weeks, and despite reading just about every word of the Xen wiki and numerous other posts and mails, I can't find a solution to my problem. Any help on this would be much appreciated!

 

Setup:

 

HP ML350 G4, dual Xeon, 6 GB RAM, 6-disk SCSI RAID array, Digium TDM410P analogue PBX card on PCI. Hardware virtualization (VT-d) is not an option with this machine.

 

I followed these instructions (more or less) to set up Dom0 and DomU:

 

http://www.howtoforge.com/virtualization-with-xen-on-centos-6.3-x86_64-paravirtualization-and-hardware-virtualization

 

with the following changes:

 

Host Dom0 (Centos 6.4):

xen-4.2.2-4.el6.x86_64

kernel-xen-3.9.2-1.el6xen.x86_64

libvirt 1.0.3-1 (python-virtinstall causes libvirt to be upgraded to 1.0.3; from checking the source, the Xen patch appears to be there already, so no recompile is needed, and the Xen patch doesn't work with this source anyway).

XEND has been disabled at boot because it causes problems with the XL tools, although the same address space collision occurs if I use the XM tool set.
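For the record, disabling it was just the standard init-script handling on CentOS 6 (service name as it appears on my install):

chkconfig xend off
service xend stop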

I have tried xen-pciback.hide(06:01.0) in the kernel module definitions in boot.conf, but this doesn't seem to do anything. Adding entries to modprobe.conf and rc.local works better.
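The sort of entries I mean are along these lines (file names and exact option syntax are from memory, so treat this as a sketch rather than a verified config):

# /etc/modprobe.d/xen-pciback.conf
options xen-pciback hide=(0000:06:01.0)

# /etc/rc.local - late rebind of the card to pciback via sysfs
# (the unbind step only matters if another driver has already claimed the card)
echo 0000:06:01.0 > /sys/bus/pci/devices/0000:06:01.0/driver/unbind
echo 0000:06:01.0 > /sys/bus/pci/drivers/pciback/new_slot
echo 0000:06:01.0 > /sys/bus/pci/drivers/pciback/bind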

The device I'm trying to pass through is defined as follows:

 

06:01.0 Ethernet controller: Digium, Inc. Wildcard TDM410 4-port analog card (rev 11)

        Subsystem: Digium, Inc. Wildcard TDM410 4-port analog card

        Control: I/O- Mem- BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-

        Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-

        Interrupt: pin A routed to IRQ 16

        Region 0: I/O ports at 5000 [disabled] [size=256]

        Region 1: Memory at fdef0000 (32-bit, non-prefetchable) [disabled] [size=1K]

        [virtual] Expansion ROM at f0000000 [disabled] [size=128K]

        Capabilities: [c0] Power Management version 2

                Flags: PMEClk- DSI- D1+ D2+ AuxCurrent=100mA PME(D0+,D1+,D2+,D3hot+,D3cold+)

                Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-

        Kernel driver in use: pciback

 

Guest DomU (Centos 6.4):

kernel-xen-3.9.2-1.el6xen.x86_64

Created using virt-install onto a 20 GB LVM volume with 1024 MB RAM.

The XML for the DomU was dumped and converted to native format, then the domain was destroyed, undefined and recreated using xl create with the new cfg file. This is to allow inclusion of the pci = ['06:01.0'] parameter in the config.
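For reference, the create/convert sequence was roughly as follows (the domain name, LV path and install URL here are placeholders, not my exact values):

# paravirt guest created with virt-install, something like:
virt-install -p -n centos-domU -r 1024 --disk path=/dev/vg00/centos-domU \
    -l http://mirror.example.com/centos/6.4/os/x86_64 --nographics

# dump the libvirt XML, convert it to a native cfg, then recreate with xl:
virsh dumpxml centos-domU > centos-domU.xml
virsh domxml-to-native xen-xm centos-domU.xml > centos-domU.cfg
virsh destroy centos-domU
virsh undefine centos-domU
# add pci = [ '06:01.0' ] to centos-domU.cfg, then:
xl create centos-domU.cfg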

 

 

Using the static setup, I can get Dom0 to hide the PCI device. I can also achieve the same effect with the dynamic setup using pci-assignable-add and pci-attach (command sketch below). After the sketch is the dmesg output relating to the device; note reg 30, as this seems to be where the problem is.
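The dynamic sequence is, roughly (domain name omitted; worth double-checking the sub-command names against the xl man page):

xl pci-assignable-add 06:01.0
xl pci-assignable-list
xl pci-attach <domU name> 06:01.0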

 

pci 0000:06:01.0: [d161:8005] type 00 class 0x020000

pci 0000:06:01.0: reg 10: [io 0x5000-0x50ff]

pci 0000:06:01.0: reg 14: [mem 0xfdef0000-0xfdef03ff]

pci 0000:06:01.0: reg 30: [mem 0x00000000-0x0001ffff pref]

pci 0000:06:01.0: supports D1 D2

pci 0000:06:01.0: PME# supported from D0 D1 D2 D3hot D3cold

pci 0000:06:01.0: BAR 6: assigned [mem 0xf0000000-0xf001ffff pref]

pciback 0000:06:01.0: seizing device

pciback 0000:06:01.0: PCI IRQ 48 -> rerouted to legacy IRQ 16

pciback 0000:06:01.0: PCI IRQ 48 -> rerouted to legacy IRQ 16

xen-pciback: vpci: 0000:06:01.0: assign to virtual slot 0

 

From Dom0 I can assign the device to the guest either statically in the config file or dynamically as described above. Both scenarios result in the same error being displayed in the DomU:

 

pcifront pci-0: Installing PCI frontend

pcifront pci-0: Creating PCI Frontend Bus 0000:00

pcifront pci-0: PCI host bridge to bus 0000:00

pci_bus 0000:00: root bus resource [io 0x0000-0xffff]

pci_bus 0000:00: root bus resource [mem 0x00000000-0xfffffffff]

pci_bus 0000:00: root bus resource [bus 00-ff]

pci 0000:00:00.0: [d161:8005] type 00 class 0x020000

pci 0000:00:00.0: reg 10: [io 0x5000-0x50ff]

pci 0000:00:00.0: reg 14: [mem 0xfdef0000-0xfdef03ff]

pci 0000:00:00.0: reg 30: [mem 0xf0000000-0xffffffff pref]

pci 0000:00:00.0: supports D1 D2

pcifront pci-0: claiming resource 0000:00:00.0/0

pcifront pci-0: claiming resource 0000:00:00.0/1

pcifront pci-0: claiming resource 0000:00:00.0/6

pci 0000:00:00.0: address space collision: [mem 0xf0000000-0xffffffff pref] conflicts with 0000:00:00.0 [mem 0xfdef0000-0xfdef03ff]

pcifront pci-0: Could not claim resource 0000:00:00.0/6! Device offline. Try using e820_host=1 in the guest config.

 

This appears to show the PCI device conflicting with itself (reg 14 against reg 30), because the address range reported for reg 30 by pcifront in the guest differs from the one set up by pciback in Dom0.

 

Things I have already tried:

- Setting up the domain with both XM and XL, with the same result.
- Adding passthrough and permissive settings, with no change.
- Adding iommu=soft to the guest kernel command line.
- Adding the e820_host flag to the guest config file (see the cfg excerpt after this list), but this doesn't seem to solve anything.
- Different Xen-enabled kernels.
- Wiping the server and rebuilding the whole thing from scratch (more than once).

The Digium PCI card works fine on a normal Centos 6.3 setup with no Xen.
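For completeness, the relevant lines of the guest cfg I am currently testing with look roughly like this (the domain name is the same placeholder as above):

name      = "centos-domU"
memory    = 1024
pci       = [ '06:01.0' ]
e820_host = 1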

 

I'm out of ideas now on how to solve this, so if anyone has made this card work by doing something different, I'd be grateful for any suggestions. I've looked at the source for pcifront.c and come to the conclusion that my C coding skills are not going to be good enough to debug/change it.

I can provide more dmesg outputs or other documentation if needed.

 

Thanks in advance for any help

 

Jon

 

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users

 

