
Re: [Xen-devel] LSI SAS2008 Option Rom Failure



On Tue, 31 Jul 2012, David Erickson wrote:
> On Tue, Jul 31, 2012 at 4:39 AM, Stefano Stabellini
> <stefano.stabellini@xxxxxxxxxxxxx> wrote:
> > On Tue, 31 Jul 2012, David Erickson wrote:
> >> Just got back in town, following up on the prior discussion.  I
> >> successfully compiled the latest code (25688 and qemu upstream
> >> 5e3bc7144edd6e4fa2824944e5eb16c28197dd5a), but am still having
> >> problems during initialization of the card in the guest, in particular
> >> the "unsupported delivery mode 3" message, which seems to cause
> >> interrupt-related problems during init.  I've again attached the
> >> qemu-dm-log and xl dmesg log files, plus screenshots of the guest dmesg
> >> and, for comparison, of the same livecd booting natively on the box.
> >
> > "unsupported delivery mode 3" means that the Linux guest is trying to
> > remap the MSI onto an event channel but Xen is still trying to deliver
> > the MSI using the emulated code path anyway.
> >
> > Adding
> >
> > #define XEN_PT_LOGGING_ENABLED 1
> >
> > at the top of hw/xen_pt.h and posting the additional QEMU logs could
> > be helpful.
> >
> > The full Xen logs might also be useful. I would add some more tracing to
> > the hypervisor too:
> >
> > diff --git a/xen/drivers/passthrough/io.c b/xen/drivers/passthrough/io.c
> > index b5975d1..08f4ab7 100644
> > --- a/xen/drivers/passthrough/io.c
> > +++ b/xen/drivers/passthrough/io.c
> > @@ -474,6 +474,11 @@ static void hvm_pci_msi_assert(
> >  {
> >      struct pirq *pirq = dpci_pirq(pirq_dpci);
> >
> > +    printk("DEBUG %s pirq=%d hvm_domain_use_pirq=%d emuirq=%d\n", __func__,
> > +            pirq->pirq,
> > +            hvm_domain_use_pirq(d, pirq),
> > +            pirq->arch.hvm.emuirq);
> > +
> >      if ( hvm_domain_use_pirq(d, pirq) )
> >          send_guest_pirq(d, pirq);
> >      else
> 
> Hi Stefano-
> I made the modifications (it looks like that #define hasn't been used
> in a while; it caused a few compilation issues, and I had to prefix most
> of the logged variables with s->hostaddr), and am attaching the
> qemu-dm-ubuntu.log and the dmesg from xl.  You referred to the full Xen
> logs; where do I find those?

Thanks for the logs!
You can get the full Xen logs from the serial console, but you can also
grab the last few lines with "xl dmesg", as you did, and that seems to be
enough in this case.
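
If you want to set up the serial console, here is a minimal sketch of what
it usually involves; the device and baud rate below are only placeholders,
so adjust them for your box:

    # Xen command line in the GRUB entry:
    com1=115200,8n1 console=com1,vga loglvl=all guest_loglvl=all

    # then capture the output on the machine attached to the serial port,
    # for example with:
    minicom -D /dev/ttyS0 -b 115200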


The initial MSI remapping has been done:

[00:05.0] msi_msix_setup: requested pirq 4 for MSI-X (vec: 0, entry: 0)
[00:05.0] msi_msix_update: Updating MSI-X with pirq 4 gvec 0 gflags 0x3037 
(entry: 0)

But the guest is not issuing the EVTCHNOP_bind_pirq hypercall, which is
required to receive event notifications (emuirq=-1 in the Xen logs).
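
For context, the binding boils down to a hypercall like the one sketched
below, based on the public Xen interface headers. The helper name is made
up and the surrounding Linux irq plumbing is omitted, so treat it as an
illustration rather than the actual guest code (in Linux the real logic
lives in drivers/xen/events.c):

#include <asm/xen/hypercall.h>           /* HYPERVISOR_event_channel_op */
#include <xen/interface/event_channel.h> /* evtchn_bind_pirq, EVTCHNOP_bind_pirq */

/* Sketch: bind a physical irq (pirq) to an event channel so that Xen
 * delivers the MSI as an event instead of through the emulated path. */
static int bind_pirq_to_evtchn(uint32_t pirq)
{
    struct evtchn_bind_pirq bind = {
        .pirq  = pirq,
        .flags = 0,   /* or BIND_PIRQ__WILL_SHARE for shared irqs */
    };

    if (HYPERVISOR_event_channel_op(EVTCHNOP_bind_pirq, &bind) != 0)
        return -1;    /* hypercall failed, nothing bound */

    return bind.port; /* event channel port the MSI will be delivered on */
}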

Now we need to figure out why: we still need more logs, this time on the
guest side.
What is the kernel version that you are using in the guest?
Could you please add "debug loglevel=9" to the guest kernel command line
and then post the guest dmesg again? 
It would be great if you could use the emulated serial to get the logs
rather than a picture. You can do that by adding serial='pty' to the VM
config file and console=ttyS0 to the guest command line.
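
To make that concrete (the values here are only an example), the relevant
bits would look something like:

    # VM config file:
    serial='pty'

    # guest kernel command line, i.e. the grub entry inside the guest:
    ... console=ttyS0 debug loglevel=9

    # then attach to the console from dom0 with:
    xl console <domain-name>
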
This additional Xen change could also tell us whether the
EVTCHNOP_bind_pirq hypercall has been made:


diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index 53777f8..d65a97a 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -405,6 +405,8 @@ static long evtchn_bind_pirq(evtchn_bind_pirq_t *bind)
 #ifdef CONFIG_X86
     if ( is_hvm_domain(d) && domain_pirq_to_irq(d, pirq) > 0 )
         map_domain_emuirq_pirq(d, pirq, IRQ_PT);
+    printk("DEBUG %s %d pirq=%d irq=%d emuirq=%d\n", __func__, __LINE__,
+            pirq, domain_pirq_to_irq(d, pirq), domain_pirq_to_emuirq(d, pirq));
 #endif
 
  out:
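
After rebuilding the hypervisor with this change, the extra printk should
show up in "xl dmesg" (or on the serial console) when the guest issues
EVTCHNOP_bind_pirq for that pirq; if it never appears, that confirms the
guest is not making the hypercall at all.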
