Re: [Xen-devel] Got kernel BUG message while PCI pass-through to domU
On Sat, Jan 21, 2012 at 10:32:37PM -0800, Eric Camachat wrote:
> Hi all,
>
> I got a kernel BUG message when doing PCI pass-through to a domU (xen-pv).
> Has anyone seen this?

Yes. The fix is in the upstream kernel. Please use 3.1.

>
> Eric
>
> dom0 kernel config:
> $ grep XEN_PCI linux-2.6.32.24/.config
> CONFIG_XEN_PCI_PASSTHROUGH=y
> CONFIG_XEN_PCIDEV_FRONTEND=y
> CONFIG_XEN_PCIDEV_BACKEND=y
> # CONFIG_XEN_PCIDEV_BACKEND_VPCI is not set
> CONFIG_XEN_PCIDEV_BACKEND_PASS=y
> # CONFIG_XEN_PCIDEV_BACKEND_SLOT is not set
> # CONFIG_XEN_PCIDEV_BACKEND_CONTROLLER is not set
> # CONFIG_XEN_PCIDEV_BE_DEBUG is not set
>
> When starting the domU, I got this error message in dom0:
>
> BUG: sleeping function called from invalid context at mm/slab.c:3016
> in_atomic(): 1, irqs_disabled(): 0, pid: 21, name: xenwatch
> Pid: 21, comm: xenwatch Tainted: P 2.6.32.24-ws-symbol #1
> Call Trace:
>  [<ffffffff810365e4>] __might_sleep+0xf4/0x110
>  [<ffffffff810ac478>] __kmalloc+0x118/0x140
>  [<ffffffff811c1837>] kvasprintf+0x57/0x90
>  [<ffffffff813fcd8a>] ? error_exit+0x2a/0x60
>  [<ffffffff811c18d0>] kasprintf+0x60/0x70
>  [<ffffffff810101c2>] ? check_events+0x12/0x20
>  [<ffffffff810101af>] ? xen_restore_fl_direct_end+0x0/0x1
>  [<ffffffff810ab6a7>] ? kfree+0x87/0xd0
>  [<ffffffff8123328a>] ? read_reply+0x13a/0x150
>  [<ffffffff812337ae>] join+0x4e/0x60
>  [<ffffffff8123396c>] xenbus_read+0x2c/0x70
>  [<ffffffff8123402f>] xenbus_scanf+0x2f/0xa0
>  [<ffffffff8100f82d>] ? xen_force_evtchn_callback+0xd/0x10
>  [<ffffffff810101c2>] ? check_events+0x12/0x20
>  [<ffffffff810101af>] ? xen_restore_fl_direct_end+0x0/0x1
>  [<ffffffff810ab6a7>] ? kfree+0x87/0xd0
>  [<ffffffff8123391c>] ? xenbus_printf+0xbc/0xe0
>  [<ffffffff8100f82d>] ? xen_force_evtchn_callback+0xd/0x10
>  [<ffffffff8123a600>] pciback_publish_pci_root+0x40/0x1a0
>  [<ffffffff810101af>] ? xen_restore_fl_direct_end+0x0/0x1
>  [<ffffffff810ac53e>] ? kmem_cache_alloc+0x9e/0xf0
>  [<ffffffff81010e26>] ? xen_register_device_domain_owner+0x86/0xc0
>  [<ffffffff8123cc78>] pciback_publish_pci_roots+0x88/0xc0
>  [<ffffffff8123a5c0>] ? pciback_publish_pci_root+0x0/0x1a0
>  [<ffffffff8123aadd>] pciback_be_watch+0x1dd/0x290
>  [<ffffffff81232d64>] ? xs_error+0x14/0x20
>  [<ffffffff81233571>] ? xs_watch+0x51/0x60
>  [<ffffffff81233b0d>] ? register_xenbus_watch+0xdd/0xf0
>  [<ffffffff8123a900>] ? pciback_be_watch+0x0/0x290
>  [<ffffffff8123b0a9>] pciback_xenbus_probe+0x139/0x140
>  [<ffffffff81234f3a>] xenbus_dev_probe+0xaa/0x130
>  [<ffffffff8127950b>] driver_probe_device+0x7b/0x160
>  [<ffffffff812797e0>] ? __device_attach+0x0/0x50
>  [<ffffffff8127981d>] __device_attach+0x3d/0x50
>  [<ffffffff812785e8>] bus_for_each_drv+0x68/0x90
>  [<ffffffff8127973b>] device_attach+0x7b/0x80
>  [<ffffffff81278565>] bus_probe_device+0x25/0x40
>  [<ffffffff81276ac3>] device_add+0x293/0x570
>  [<ffffffff811b8da6>] ? kobject_init+0x36/0x80
>  [<ffffffff81276db9>] device_register+0x19/0x20
>  [<ffffffff81234780>] xenbus_probe_node+0x130/0x1b0
>  [<ffffffff81234996>] xenbus_dev_changed+0x196/0x1a0
>  [<ffffffff810101af>] ? xen_restore_fl_direct_end+0x0/0x1
>  [<ffffffff8103652b>] ? __might_sleep+0x3b/0x110
>  [<ffffffff812350b6>] backend_changed+0x16/0x20
>  [<ffffffff81233e4b>] xenwatch_thread+0x10b/0x150
>  [<ffffffff81057d90>] ? autoremove_wake_function+0x0/0x40
>  [<ffffffff81233d40>] ? xenwatch_thread+0x0/0x150
>  [<ffffffff81057c1e>] kthread+0x8e/0xa0
>  [<ffffffff81014dfa>] child_rip+0xa/0x20
>  [<ffffffff81013fa6>] ? int_ret_from_sys_call+0x7/0x1b

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
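The trace above shows the sleeping-allocation warning being raised from a
xenbus read: xenbus_scanf() -> xenbus_read() -> join() -> kasprintf(GFP_KERNEL),
reached from pciback_publish_pci_root() while in_atomic() is 1 with interrupts
still enabled, which points to a lock being held around the xenbus access.
Below is a minimal sketch of the pattern that trips this check; it is not the
pciback code or the upstream fix, the names demo_lock/demo_publish are invented
for the example, and the spinlock is an assumption inferred from the
in_atomic()/irqs_disabled() values in the splat.

/*
 * Minimal sketch of a "sleeping function called from invalid context"
 * bug: a GFP_KERNEL allocation (which may sleep) is performed while a
 * spinlock is held, i.e. in atomic context.  Illustrative only; the
 * names below are invented for this example.
 */
#include <linux/module.h>
#include <linux/spinlock.h>
#include <linux/slab.h>

static DEFINE_SPINLOCK(demo_lock);

static void demo_publish(void)
{
	char *path;

	spin_lock(&demo_lock);               /* enter atomic context         */
	path = kasprintf(GFP_KERNEL,         /* GFP_KERNEL may sleep, so the */
			 "%04x:%02x", 0, 0); /* might_sleep() check fires    */
	kfree(path);
	spin_unlock(&demo_lock);
}

static int __init demo_init(void)
{
	/* Prints the BUG splat when sleep-in-atomic debugging is enabled
	 * (e.g. CONFIG_DEBUG_ATOMIC_SLEEP on current kernels). */
	demo_publish();
	return 0;
}

static void __exit demo_exit(void)
{
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");

The usual remedies for this class of bug are to move the sleeping call (here,
the xenbus read) outside the locked region or to convert the lock to a mutex;
per the reply above, the pciback driver merged upstream already handles this,
so running a 3.1 (or later) dom0 kernel avoids the splat without patching the
2.6.32 backend locally.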