
[Xen-devel] MSI message data register configuration in Xen guests


  • To: xen-devel@xxxxxxxxxxxxx
  • From: Deep Debroy <ddebroy@xxxxxxxxx>
  • Date: Mon, 25 Jun 2012 19:38:34 -0700
  • Delivery-date: Tue, 26 Jun 2012 02:39:11 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xen.org>

Hi, I was playing around with an MSI-capable virtual device (so far
submitted as patches only) in the upstream QEMU tree, but I'm having
trouble getting it to work in a Xen HVM guest. The device is a QEMU
implementation of VMware's pvscsi controller. The device works fine in
a Xen guest when I switch the device's code to force the use of legacy
interrupts with upstream QEMU. With MSI-based interrupts, the device
works fine in a KVM guest but, as stated before, not in a Xen guest.
After digging a bit, it appears the reason for the failure in Xen
guests is that the MSI data register in the Xen guest ends up with a
value of 0x4300, whose Delivery Mode field value of 3 is reserved (per
the spec) and therefore illegal. The vmsi_deliver routine in Xen
rejects MSI interrupts with such data as illegal (as expected), causing
all commands issued by the guest OS on the device to time out.

Given this scenario, I was wondering if anyone can shed some light on
how to debug this further for Xen. Something I would specifically like
to know is where the MSI data register configuration actually happens.
Is it done by some code specific to Xen and within the Xen codebase,
or is it all done within QEMU?

Thanks,
Deep

P.S. some details on the device's PCI configuration:

lspci output for a working instance in KVM:

00:07.0 SCSI storage controller: VMware PVSCSI SCSI Controller
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr-
Stepping- SERR- FastB2B- DisINTx+
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort-
<TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 255
Interrupt: pin A routed to IRQ 45
Region 0: Memory at febf0000 (32-bit, non-prefetchable) [size=32K]
Capabilities: [50] MSI: Enable+ Count=1/1 Maskable- 64bit+
Address: 00000000fee0300c  Data: 4161
Kernel driver in use: vmw_pvscsi
Kernel modules: vmw_pvscsi

Here is the lspci output for the scenario where it's failing to work in Xen:

00:04.0 SCSI storage controller: VMware PVSCSI SCSI Controller
Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr-
Stepping- SERR- FastB2B- DisINTx+
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort-
<TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 255
Interrupt: pin A routed to IRQ 80
Region 0: Memory at f3020000 (32-bit, non-prefetchable) [size=32K]
Capabilities: [50] MSI: Enable+ Count=1/1 Maskable- 64bit+
Address: 00000000fee36000  Data: 4300
Kernel driver in use: vmw_pvscsi
Kernel modules: vmw_pvscsi

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
