
Re: [Xen-devel] Xen-unstable: irq.c:xxxx: dom1: forcing unbind of pirq 40

Tuesday, December 9, 2014, 7:05:32 PM, you wrote:

> Tuesday, December 9, 2014, 6:29:08 PM, you wrote:

>>>>> On 09.12.14 at 17:24, <linux@xxxxxxxxxxxxxx> wrote:
>>> Currently i'm running Xen-unstable with Stefano's libxl fixup patches
>>> (not doing reset due to qmp race, switching order of
>>> xc_domain_irq_permission and xc_physdev_unmap_pirq)
>>> and with a kernel with Konrad's 3.19 patch series.
>>> Although i have seen these messages before, so i don't know if it is a
>>> recent regression (don't think so).
>>> I still see these in xl dmesg when shutting down guests with pci
>>> passthrough:
>>> "irq.c:xxxx: dom1: forcing unbind of pirq 40"
>>> It seems to cleanup ok afterwards, but from the sound of the warning it
>>> shouldn't get to that point. I get these for both PV and HVM guests.

>> These messages simply indicate that the guest didn't do some
>> expected cleanup itself.

> OK, the thing is that at present it does this on every shutdown of guests
> with pci-passthrough, all running a recent kernel. That hasn't always been
> that way as far as i can remember. I will see if i have some old
> guest kernels and see if i get the same there.

The most ancient i had around was 3.12, tested on the PV guest, but no 

>> I see these too when Dom0 shuts down,
>> at least for certain drivers. For HVM arguably qemu might know
>> better and tear things down properly, but in no event are these
>> messages - when occurring in connection with a guest going
>> down - indications of a problem. We could even see to lower their
>> severity or suppress them altogether when the domain is known
>> to be dying (you may want to check whether ->is_dying is already
>> set at that point).

> Just tested, but for both PV and HVM, when it enters "unmap_domain_pirq",
> d->is_dying ? 1 : 0 returns 0.

>> Jan

Xen-devel mailing list