
Re: [Xen-devel] xen-hypervisor-4.3-amd64 4.3.0-3+b1 -> Intel M/B, I/O virt. enabled, start vm -> Kernel panic

On Mon, Jun 02, 2014 at 10:34:51AM +0100, David Vrabel wrote:
> On 01/06/14 17:18, Jo Mills wrote:
> > Hi Konrad et al,
> >
> > I have had no reply from the e1000-devel list about my "e1000 Tx Hang"
> > problem, but I may have stumbled on something relevant.
> >
> > Today I created a new VM (wheezy) running on a Wheezy dom0 - my node
> > blue in the previous e-mail trail.  I decided to use the e1000e device
> > at 0000:01:00.0 via pciback as this device is currently free (I'm
> > building a replacement for an out of date DMZ VM).
> Can you check you have 4704fe4f03a5ab27e3c36184af85d5000e0f8a48
> (xen/events: mask events when changing their VCPU binding) in your domU
> kernel?
> David

Hi David,

Many thanks for the quick reply.  To the best of my ability I have
checked and I believe the above is in the domU kernel.

root@vm-server-22:~# cat /proc/version
  Linux version 3.2.0-4-amd64 (debian-kernel@xxxxxxxxxxxxxxxx) \
    (gcc version 4.6.3 (Debian 4.6.3-14) ) #1 SMP Debian 3.2.57-3+deb7u1

root@white:~# dpkg -l linux-source-3.2
  ii  linux-source-3.2                   3.2.57-3+deb7u1        all

and then from the above sources, if I look in
linux-source-3.2/drivers/xen/events.c I can see:

/* Rebind an evtchn so that it gets delivered to a specific cpu */
static int rebind_irq_to_cpu(unsigned irq, unsigned tcpu)
{
        struct shared_info *s = HYPERVISOR_shared_info;
        struct evtchn_bind_vcpu bind_vcpu;
        int evtchn = evtchn_from_irq(irq);
        int masked;

        if (!VALID_EVTCHN(evtchn))
                return -1;

        /*
         * Events delivered via platform PCI interrupts are always
         * routed to vcpu 0 and hence cannot be rebound.
         */
        if (xen_hvm_domain() && !xen_have_vector_callback)
                return -1;

        /* Send future instances of this interrupt to other vcpu. */
        bind_vcpu.port = evtchn;
        bind_vcpu.vcpu = tcpu;

        /*
         * Mask the event while changing the VCPU binding to prevent
         * it being delivered on an unexpected VCPU.
         */
        masked = sync_test_and_set_bit(evtchn, s->evtchn_mask);

        /*
         * If this fails, it usually just indicates that we're dealing with a
         * virq or IPI channel, which don't actually need to be rebound. Ignore
         * it, but don't do the xenlinux-level rebind in that case.
         */
        if (HYPERVISOR_event_channel_op(EVTCHNOP_bind_vcpu, &bind_vcpu) >= 0)
                bind_evtchn_to_cpu(evtchn, tcpu);

        if (!masked)
                unmask_evtchn(evtchn);

        return 0;
}

Best regards,


Xen-devel mailing list
