
Re: [Xen-devel] MSI message data register configuration in Xen guests



On Mon, Jun 25, 2012 at 7:51 PM, Rolu <rolu@xxxxxxxx> wrote:
>
> On Tue, Jun 26, 2012 at 4:38 AM, Deep Debroy <ddebroy@xxxxxxxxx> wrote:
> > Hi, I was playing around with a MSI capable virtual device (so far
> > submitted as patches only) in the upstream qemu tree but having
> > trouble getting it to work on a Xen hvm guest. The device happens to
> > be a QEMU implementation of VMWare's pvscsi controller. The device
> > works fine in a Xen guest when I switch the device's code to force
> > usage of legacy interrupts with upstream QEMU. With MSI-based
> > interrupts, the device works fine on a KVM guest but, as stated
> > before, not on a Xen guest. After digging a bit, it appears the
> > reason for the failure in Xen guests is that the MSI data register
> > in the Xen guest ends up with a value of 0x4300, where the Delivery
> > Mode value of 3 is reserved (per spec) and therefore illegal. The
> > vmsi_deliver routine in Xen rejects MSI interrupts with such data
> > as illegal (as expected), causing all commands issued by the guest
> > OS on the device to time out.
> >
> > Given the above scenario, I was wondering if anyone can shed some
> > light on how to debug this further for Xen. Something I would
> > specifically like to know is where the MSI data register
> > configuration actually happens. Is it done by some code specific to
> > Xen and within the Xen codebase, or is it all done within QEMU?
> >
>
> This seems like the same issue I ran into, though in my case it is
> with passed through physical devices. See
> http://lists.xen.org/archives/html/xen-devel/2012-06/msg01423.html and
> the older messages in that thread for more info on what's going on. No
> fix yet but help debugging is very welcome.

Thanks, Rolu, for pointing out the other thread - it was very useful.
Some of the symptoms appear to be identical in my case. However, I am
not using a pass-through device. Instead, in my case it's a fully
virtualized device, pretty much identical to a raw file-backed disk
image, except that the controller is pvscsi rather than lsi. Therefore
I guess some of the later discussion in the other thread around
pass-through-specific areas of code in qemu is not relevant? Please
correct me if I am wrong. Also note that I am using upstream qemu,
where neither the #define for PT_PCI_MSITRANSLATE_DEFAULT nor
xenstore.c exists (which is where Stefano's suggested change appeared
to be).

So far, here's what I am observing in the HVM Linux guest:

On the guest side, as discussed in the other thread,
xen_hvm_setup_msi_irqs is invoked for the device, and a value of
0x4300 is composed by xen_msi_compose_msg and written into the data
register.
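
To spell out why that value is problematic, here's a minimal
standalone decode of the MSI data register fields (bit layout per the
MSI spec; the little program is mine, purely for illustration):

#include <stdint.h>
#include <stdio.h>

/* MSI data register layout (low 16 bits), per the MSI spec:
 *   bits 0-7   vector
 *   bits 8-10  delivery mode (0 = Fixed, 1 = Lowest Priority, ...,
 *              3 = a reserved encoding)
 *   bit 14     level (assert/deassert)
 *   bit 15     trigger mode (0 = edge, 1 = level)
 */
int main(void)
{
    uint32_t data          = 0x4300; /* from xen_msi_compose_msg */
    uint32_t vector        = data & 0xff;
    uint32_t delivery_mode = (data >> 8) & 0x7;
    uint32_t level_assert  = (data >> 14) & 0x1;
    uint32_t trigger_mode  = (data >> 15) & 0x1;

    printf("vector=%u delivery_mode=%u level=%u trigger=%s\n",
           vector, delivery_mode, level_assert,
           trigger_mode ? "level" : "edge");
    /* prints: vector=0 delivery_mode=3 level=1 trigger=edge, i.e.
     * delivery mode 3, the reserved encoding vmsi_deliver rejects */
    return 0;
}
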
On the qemu (upstream) side, when the virtualized controller is
trying to complete a request, it invokes the following chain of
calls: stl_le_phys -> xen_apic_mem_write -> xen_hvm_inject_msi.
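
If it helps, here's my paraphrase of that qemu path (from reading the
xen apic and xen-all code in upstream qemu - treat it as a sketch,
not the exact code; the stub below stands in for the real
xc_hvm_inject_msi(xen_xc, xen_domid, addr, data) libxc call):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define APIC_DEFAULT_ADDRESS 0xfee00000u

/* stand-in for the libxc hypercall wrapper; the real one issues
 * HVMOP_inject_msi to the hypervisor */
static void xc_hvm_inject_msi_stub(uint64_t addr, uint32_t data)
{
    printf("inject_msi: addr=0x%" PRIx64 " data=0x%" PRIx32 "\n",
           addr, data);
}

static void xen_hvm_inject_msi(uint64_t addr, uint32_t data)
{
    xc_hvm_inject_msi_stub(addr, data);
}

/* stl_le_phys() into the 0xfeexxxxx window lands here; note that
 * addr and data are forwarded to Xen unmodified */
static void xen_apic_mem_write(uint64_t addr, uint64_t data)
{
    xen_hvm_inject_msi(APIC_DEFAULT_ADDRESS | addr, (uint32_t)data);
}

int main(void)
{
    /* device completion: a 32-bit write of the guest-programmed MSI
     * data (0x4300) to the guest-programmed MSI address */
    xen_apic_mem_write(0, 0x4300);
    return 0;
}

So qemu just forwards whatever address/data pair the guest programmed,
which is why the 0x4300 composed in the guest is exactly what Xen
sees.
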
On the Xen side, this ends up in hvmop_inject_msi -> hvm_inject_msi
-> vmsi_deliver, and vmsi_deliver, as previously discussed, rejects
the delivery mode of 0x3.
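
For reference, this is my paraphrase of the check that fires (the
dest_* names match Xen's vmsi.c; the wrapper and main() are mine,
purely for illustration):

#include <stdint.h>
#include <stdio.h>

enum { dest_Fixed = 0, dest_LowestPrio = 1 };

static int vmsi_deliver_sketch(unsigned int delivery_mode,
                               unsigned int vector)
{
    switch (delivery_mode) {
    case dest_Fixed:
    case dest_LowestPrio:
        /* the real code delivers the vector to the target vlapic(s) */
        printf("delivering vector 0x%x\n", vector);
        return 0;
    default:
        /* corresponds to the "Unsupported delivery mode" warning
         * that Xen logs before dropping the interrupt */
        printf("Unsupported delivery mode %u\n", delivery_mode);
        return -1;
    }
}

int main(void)
{
    uint32_t data = 0x4300;
    return vmsi_deliver_sketch((data >> 8) & 0x7, data & 0xff);
}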

Is the above sequence of interactions the expected path for an HVM
guest trying to use a fully virtualized device/controller that uses
MSI in upstream qemu? If so, and if a standard Linux guest always
populates the value of 0x4300 in the MSI data register through
xen_hvm_setup_msi_irqs, how are MSI notifications from a device in
qemu supposed to get past the vmsi_deliver check, given that the
delivery mode of 0x3 is indeed reserved?

Thanks,
Deep
