
Re: [Xen-devel] dom0 kernel - irq nobody cared ... the continuing saga ..



On Tue, Feb 10, 2015 at 02:07:05PM +0100, Sander Eikelenboom wrote:
> 
> Tuesday, February 10, 2015, 10:35:36 AM, you wrote:
> 
> >>>> On 10.02.15 at 10:19, <linux@xxxxxxxxxxxxxx> wrote:
> >>> Coming back to the /proc/interrupts output you posted earlier:
> >> 
> >>> /proc/interrupts shows the high count:
> >> 
> >>>            CPU0       CPU1       CPU2       CPU3
> >>>   8:          0          0          0          0  xen-pirq-ioapic-edge   rtc0
> >>>   9:          1          0          0          0  xen-pirq-ioapic-level  acpi
> >>>  16:         29          0          0          0  xen-pirq-ioapic-level  ehci_hcd:usb3
> >>>  18:     200000          0          0          0  xen-pirq-ioapic-level  ata_generic
> >> 
> >>> I would have thought that xen-pciback would install an interrupt
> >>> handler here too when a device using IRQ 18 gets handed to a
> >>> guest. May there be something broken in xen_pcibk_control_isr()?
> >> 
> >> It seems to only do that for PV guests, not for HVM.
> 
> > Interesting; no idea why that would be.
> 
> Hi Jan,
> 
> Coming back to this part .. with some additional debugging on HVM guest start I get:
> [   84.362097] pciback 0000:02:00.0: resetting (FLR, D3,  bus) the device
> [   84.422884] pciback 0000:02:00.0: xen_pcibk_reset_device: me here
> [   84.452639] pciback 0000:02:00.0: xen_pcibk_control_isr: return !enable && !dev_data->isr_on enable:0 dev_data->isr_on:0
> [   85.636725] pciback 0000:00:19.0: resetting (FLR, D3,  ) the device
> [   85.774845] pciback 0000:00:19.0: xen_pcibk_reset_device: me here
> [   85.810857] pciback 0000:00:19.0: xen_pcibk_control_isr: return !enable && !dev_data->isr_on enable:0 dev_data->isr_on:0
> [   85.865075] xen_pciback: vpci: 0000:02:00.0: assign to virtual slot 0
> [   85.886926] pciback 0000:02:00.0: registering for 1
> [   85.907936] xen_pciback: vpci: 0000:00:19.0: assign to virtual slot 1
> [   85.929490] pciback 0000:00:19.0: registering for 1
>  
> So it bails out on:

/me nods. That path is taken with 'reset' set, so the ISR never gets installed.

Let me cook up a patch that will always install the ISR.

>         /* Asked to disable, but ISR isn't running */
>         if (!enable && !dev_data->isr_on) {
>                 dev_warn(&dev->dev, "%s: return !enable && !dev_data->isr_on enable:%d dev_data->isr_on:%d\n",
>                          __func__, enable, dev_data->isr_on ? 1 : 0);
>                 return;
>         }
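
Roughly the idea, as a completely untested sketch (the helper names
xen_pcibk_placeholder_isr / xen_pcibk_install_placeholder are only
illustrative, not what the actual patch will end up using): rather than
bailing out of xen_pcibk_control_isr() on that check, keep a handler
registered for the passed-through device at all times.

#include <linux/interrupt.h>
#include <linux/pci.h>

/* Claim the (possibly shared) line so dom0's "irq nobody cared" logic
 * does not disable it while the device is owned by an HVM guest. */
static irqreturn_t xen_pcibk_placeholder_isr(int irq, void *dev_id)
{
        return IRQ_HANDLED;
}

/* Would be called from xen_pcibk_control_isr() instead of returning
 * early, so the line always has a handler in dom0. */
static int xen_pcibk_install_placeholder(struct pci_dev *dev)
{
        return request_irq(dev->irq, xen_pcibk_placeholder_isr,
                           IRQF_SHARED, "xen-pciback-placeholder", dev);
}

(Always returning IRQ_HANDLED on a shared line also papers over a genuinely
stuck IRQ, so the real patch will want to be smarter about when to ack, but
it should be enough to keep the line from being disabled.)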
> 
> --
> Sander 
> 
> >> Don't know how it will go now after the BIOS update,
> >> lspci lists that the SMBus is also using irq 18 now, but it doesn't register
> >> a driver (according to lspci -k) and it doesn't appear in dom0's 
> >> /proc/interrupts.
> 
> > With that I don't think you're going to have problems, as the IRQ
> > then is masked.
> 
> >> How are things supposed to work with a machine with:
> >> - a shared irq
> >> - iommu + interrupt remapping enabled
> >> - HVM guests
> 
> > Konrad?
> 
> >> Would dom0 always see the legacy irq, or is Xen or the iommu routing it
> >> directly to the guest?
> 
> > A shared IRQ can't be routed directly, as all involved domains need
> > to see it.
> 
> >> And what am I supposed to see when using Xen's debug key 'i'? Should
> >> there be an entry routing it to both guests?
> 
> > Yes, if both have a driver using it.
> 
> >> .. if you know some better way to figure out where the irq storm is
> >> coming from or headed to .. I'm all ears (or eyes) ..
> 
> > I suppose that's because there's no handler installed by pciback, yet
> > IRQs generated by the passed through device also arrive in Dom0,
> > and the driver for the device left in Dom0 doesn't claim the interrupts.
> 
> > Jan
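
(To spell out why you see the storm: every handler still registered on that
shared line in dom0 returns IRQ_NONE for interrupts raised by the
passed-through device, so the core IRQ code counts them as unhandled and
eventually disables the line with the "irq nobody cared" splat. A minimal
illustration of that contract, with an illustrative handler rather than
actual pciback or driver code:

#include <linux/interrupt.h>
#include <linux/pci.h>

/* What a dom0 driver sharing the line typically does: ack interrupts
 * raised by its own device and return IRQ_NONE for everything else.
 * With the passed-through device's driver gone from dom0, nobody ever
 * returns IRQ_HANDLED for its interrupts. */
static irqreturn_t dom0_shared_line_isr(int irq, void *dev_id)
{
        struct pci_dev *pdev = dev_id;
        u16 status;

        pci_read_config_word(pdev, PCI_STATUS, &status);
        if (!(status & PCI_STATUS_INTERRUPT))   /* not our device */
                return IRQ_NONE;                /* counted as unhandled */

        /* device-specific handling would go here */
        return IRQ_HANDLED;
}

which is why having pciback claim those interrupts, as sketched above, should
make the splat go away.)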
> 
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

