[Xen-devel] Re: pv_ops dom0 kernel vga text console line offset problem
On Tue, Dec 15, 2009 at 11:08:16AM -0800, Jeremy Fitzhardinge wrote:
> On 12/13/2009 08:31 AM, Pasi Kärkkäinen wrote:
>> My newer x86_64 testbox shows a problem with the VGA text console:
>> when drm is initialized (without modesetting, the kernel has the
>> "nomodeset" parameter), all the text gets an offset, i.e. the text
>> doesn't start from the beginning of the line, but from the middle of
>> the screen.
>>
>> This happens with a pv_ops dom0 kernel, using the Xen 3.4.x hypervisor.
>> The same kernel binary works fine on bare metal (without Xen).
>>
>> I recorded a video of this behaviour; it's easiest to check it out:
>> http://pasik.reaktio.net/xen/pv_ops-dom0-debug/xen_pvops_dom0_vga_console_line_offset_problem-xvid.avi
>>
>> Any ideas?
>>
>> The hardware is a Supermicro Intel motherboard with ATI ES1000 graphics.
>>
> Yes, that's very odd. Are you using any ATI-specific drivers, or is
> this being used as plain generic VGA hardware? You have radeon drm;
> what happens if you disable that?
>

I hacked the initrd image to not load the radeon/drm/ttm modules, and also
blacklisted them in modprobe.d so that udev can't load them. The offset
problem still happens. It looks like it happens when the USB modules are
loaded? (I verified after the system started that no radeon/drm/ttm
modules were loaded.)

> I guess there's some register state which isn't being either set up
> properly, or read back properly, so the kernel driver is out of sync
> with the actual hardware state. But I'm not sure where I'd start
> looking for the problem.
>

Hmm..
I wonder if it's actually related to the other USB problem I have:
http://pasik.reaktio.net/xen/pv_ops-dom0-debug/dmesg-2.6.31.6-vga-text-console-line-offset-problem2.txt

usb 1-5: new high speed USB device using ehci_hcd and address 2

======================================================
[ INFO: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected ]
2.6.31.6 #8
------------------------------------------------------
khubd/42 [HC0[0]:SC0[0]:HE0:SE1] is trying to acquire:
 (&retval->lock){......}, at: [<ffffffff811298c4>] dma_pool_alloc+0x45/0x321

and this task is already holding:
 (&ehci->lock){-.....}, at: [<ffffffff813db662>] ehci_urb_enqueue+0xb4/0xd7a
which would create a new lock dependency:
 (&ehci->lock){-.....} -> (&retval->lock){......}

but this new dependency connects a HARDIRQ-irq-safe lock:
 (&ehci->lock){-.....}
... which became HARDIRQ-irq-safe at:
  [<ffffffff81099a1d>] __lock_acquire+0x254/0xc0e
  [<ffffffff8109a4c5>] lock_acquire+0xee/0x12e
  [<ffffffff815138f7>] _spin_lock+0x45/0x8e
  [<ffffffff813da270>] ehci_irq+0x41/0x441
  [<ffffffff813beef6>] usb_hcd_irq+0x59/0xcc
  [<ffffffff810c8834>] handle_IRQ_event+0x62/0x148
  [<ffffffff810cadc7>] handle_level_irq+0x90/0xf9
  [<ffffffff81018014>] handle_irq+0x9a/0xba
  [<ffffffff8130a929>] xen_evtchn_do_upcall+0x10c/0x1bd
  [<ffffffff810162fe>] xen_do_hypervisor_callback+0x1e/0x30
  [<ffffffffffffffff>] 0xffffffffffffffff

to a HARDIRQ-irq-unsafe lock:
 (purge_lock){+.+...}
... which became HARDIRQ-irq-unsafe at:
...
  [<ffffffff81099a91>] __lock_acquire+0x2c8/0xc0e
  [<ffffffff8109a4c5>] lock_acquire+0xee/0x12e
  [<ffffffff815138f7>] _spin_lock+0x45/0x8e
  [<ffffffff811229b4>] __purge_vmap_area_lazy+0x63/0x198
  [<ffffffff81124280>] vm_unmap_aliases+0x18f/0x1b2
  [<ffffffff8100e594>] xen_alloc_ptpage+0x47/0x75
  [<ffffffff8100e5ff>] xen_alloc_pte+0x13/0x15
  [<ffffffff81117bdb>] __pte_alloc_kernel+0x6f/0xdd
  [<ffffffff811237ad>] vmap_page_range_noflush+0x1c5/0x315
  [<ffffffff8112393e>] map_vm_area+0x41/0x6b
  [<ffffffff81123a97>] __vmalloc_area_node+0x12f/0x167
  [<ffffffff81123b5f>] __vmalloc_node+0x90/0xb5
  [<ffffffff81123dd6>] __vmalloc+0x28/0x3e
  [<ffffffff81a2db10>] alloc_large_system_hash+0x12c/0x1f8
  [<ffffffff81a30190>] vfs_caches_init+0xb8/0x140
  [<ffffffff81a08629>] start_kernel+0x3ef/0x44c
  [<ffffffff81a07930>] x86_64_start_reservations+0xbb/0xd6
  [<ffffffff81a0be20>] xen_start_kernel+0x5dc/0x5e3
  [<ffffffffffffffff>] 0xffffffffffffffff

other info that might help us debug this:

2 locks held by khubd/42:
 #0:  (usb_address0_mutex){+.+...}, at: [<ffffffff813b9f1f>] hub_port_init+0x8c/0x7ec
 #1:  (&ehci->lock){-.....}, at: [<ffffffff813db662>] ehci_urb_enqueue+0xb4/0xd7a

the HARDIRQ-irq-safe lock's dependencies:
-> (&ehci->lock){-.....} ops: 0 {
   IN-HARDIRQ-W at:
     [<ffffffff81099a1d>] __lock_acquire+0x254/0xc0e
     [<ffffffff8109a4c5>] lock_acquire+0xee/0x12e
     [<ffffffff815138f7>] _spin_lock+0x45/0x8e
     [<ffffffff813da270>] ehci_irq+0x41/0x441
     [<ffffffff813beef6>] usb_hcd_irq+0x59/0xcc
     [<ffffffff810c8834>] handle_IRQ_event+0x62/0x148
     [<ffffffff810cadc7>] handle_level_irq+0x90/0xf9
     [<ffffffff81018014>] handle_irq+0x9a/0xba
     [<ffffffff8130a929>] xen_evtchn_do_upcall+0x10c/0x1bd
     [<ffffffff810162fe>] xen_do_hypervisor_callback+0x1e/0x30
     [<ffffffffffffffff>] 0xffffffffffffffff
   INITIAL USE at:
     [<ffffffff81099b08>] __lock_acquire+0x33f/0xc0e
     [<ffffffff8109a4c5>] lock_acquire+0xee/0x12e
     [<ffffffff81513aca>] _spin_lock_irqsave+0x5d/0xab
     [<ffffffff813d8e35>] ehci_endpoint_reset+0x5a/0x12b
     [<ffffffff813be299>] usb_hcd_reset_endpoint+0x35/0x84
     [<ffffffff813c22d5>] usb_enable_endpoint+0x52/0xa6
     [<ffffffff813c2374>] usb_enable_interface+0x4b/0x74
     [<ffffffff813c44b1>] usb_set_configuration+0x479/0x6af
     [<ffffffff813ce7d1>] generic_probe+0x6c/0xcb
     [<ffffffff813c5102>] usb_probe_device+0x6c/0xb4
     [<ffffffff8135f5b3>] driver_probe_device+0xed/0x22a
     [<ffffffff8135f7e3>] __device_attach+0x4d/0x68
     [<ffffffff8135e40d>] bus_for_each_drv+0x68/0xb3
     [<ffffffff8135f8c5>] device_attach+0x75/0xa0
     [<ffffffff8135e1d1>] bus_probe_device+0x3a/0x65
     [<ffffffff8135c0bd>] device_add+0x3db/0x586
     [<ffffffff813bb91e>] usb_new_device+0x169/0x232
     [<ffffffff813c01b4>] usb_add_hcd+0x4be/0x6a8
     [<ffffffff813cfe2a>] usb_hcd_pci_probe+0x263/0x3bd
     [<ffffffff8129601f>] local_pci_probe+0x2a/0x42
     [<ffffffff8107e88f>] do_work_for_cpu+0x27/0x50
     [<ffffffff81083bcc>] kthread+0xac/0xb4
     [<ffffffff810161aa>] child_rip+0xa/0x20
     [<ffffffffffffffff>] 0xffffffffffffffff
 }
 ... key at: [<ffffffff82674fe8>] __key.35644+0x0/0x8
 -> (hcd_urb_list_lock){......} ops: 0 {
    INITIAL USE at:
      [<ffffffff81099b08>] __lock_acquire+0x33f/0xc0e
      [<ffffffff8109a4c5>] lock_acquire+0xee/0x12e
      [<ffffffff815138f7>] _spin_lock+0x45/0x8e
      [<ffffffff813bf2a0>] usb_hcd_link_urb_to_ep+0x37/0xc8
      [<ffffffff813c0a11>] usb_hcd_submit_urb+0x308/0xa03
      [<ffffffff813c17e1>] usb_submit_urb+0x258/0x2eb
      [<ffffffff813c3145>] usb_start_wait_urb+0x71/0x1d4
      [<ffffffff813c3543>] usb_control_msg+0x10c/0x144
      [<ffffffff813c4812>] usb_get_descriptor+0x83/0xc9
      [<ffffffff813c48dd>] usb_get_device_descriptor+0x85/0xc8
      [<ffffffff813c0168>] usb_add_hcd+0x472/0x6a8
      [<ffffffff813cfe2a>] usb_hcd_pci_probe+0x263/0x3bd
      [<ffffffff8129601f>] local_pci_probe+0x2a/0x42
      [<ffffffff8107e88f>] do_work_for_cpu+0x27/0x50
      [<ffffffff81083bcc>] kthread+0xac/0xb4
      [<ffffffff810161aa>] child_rip+0xa/0x20
      [<ffffffffffffffff>] 0xffffffffffffffff
  }
  ... key at: [<ffffffff817edc38>] hcd_urb_list_lock+0x18/0x40
 ...
 acquired at:
  [<ffffffff8109a242>] __lock_acquire+0xa79/0xc0e
  [<ffffffff8109a4c5>] lock_acquire+0xee/0x12e
  [<ffffffff815138f7>] _spin_lock+0x45/0x8e
  [<ffffffff813bf2a0>] usb_hcd_link_urb_to_ep+0x37/0xc8
  [<ffffffff813db67b>] ehci_urb_enqueue+0xcd/0xd7a
  [<ffffffff813c0f76>] usb_hcd_submit_urb+0x86d/0xa03
  [<ffffffff813c17e1>] usb_submit_urb+0x258/0x2eb
  [<ffffffff813c3145>] usb_start_wait_urb+0x71/0x1d4
  [<ffffffff813c3543>] usb_control_msg+0x10c/0x144
  [<ffffffff813ba1ac>] hub_port_init+0x319/0x7ec
  [<ffffffff813bd7f9>] hub_events+0x94d/0x11e6
  [<ffffffff813be0d8>] hub_thread+0x46/0x1d2
  [<ffffffff81083bcc>] kthread+0xac/0xb4
  [<ffffffff810161aa>] child_rip+0xa/0x20
  [<ffffffffffffffff>] 0xffffffffffffffff

the HARDIRQ-irq-unsafe lock's dependencies:
-> (purge_lock){+.+...} ops: 0 {
   HARDIRQ-ON-W at:
     [<ffffffff81099a91>] __lock_acquire+0x2c8/0xc0e
     [<ffffffff8109a4c5>] lock_acquire+0xee/0x12e
     [<ffffffff815138f7>] _spin_lock+0x45/0x8e
     [<ffffffff811229b4>] __purge_vmap_area_lazy+0x63/0x198
     [<ffffffff81124280>] vm_unmap_aliases+0x18f/0x1b2
     [<ffffffff8100e594>] xen_alloc_ptpage+0x47/0x75
     [<ffffffff8100e5ff>] xen_alloc_pte+0x13/0x15
     [<ffffffff81117bdb>] __pte_alloc_kernel+0x6f/0xdd
     [<ffffffff811237ad>] vmap_page_range_noflush+0x1c5/0x315
     [<ffffffff8112393e>] map_vm_area+0x41/0x6b
     [<ffffffff81123a97>] __vmalloc_area_node+0x12f/0x167
     [<ffffffff81123b5f>] __vmalloc_node+0x90/0xb5
     [<ffffffff81123dd6>] __vmalloc+0x28/0x3e
     [<ffffffff81a2db10>] alloc_large_system_hash+0x12c/0x1f8
     [<ffffffff81a30190>] vfs_caches_init+0xb8/0x140
     [<ffffffff81a08629>] start_kernel+0x3ef/0x44c
     [<ffffffff81a07930>] x86_64_start_reservations+0xbb/0xd6
     [<ffffffff81a0be20>] xen_start_kernel+0x5dc/0x5e3
     [<ffffffffffffffff>] 0xffffffffffffffff
   INITIAL USE at:
     [<ffffffff81099b08>] __lock_acquire+0x33f/0xc0e
     [<ffffffff8109a4c5>] lock_acquire+0xee/0x12e
     [<ffffffff815138f7>] _spin_lock+0x45/0x8e
     [<ffffffff811229b4>] __purge_vmap_area_lazy+0x63/0x198
     [<ffffffff81124280>] vm_unmap_aliases+0x18f/0x1b2
     [<ffffffff8100e594>] xen_alloc_ptpage+0x47/0x75
     [<ffffffff8100e5ff>] xen_alloc_pte+0x13/0x15
     [<ffffffff81117bdb>] __pte_alloc_kernel+0x6f/0xdd
     [<ffffffff811237ad>] vmap_page_range_noflush+0x1c5/0x315
     [<ffffffff8112393e>] map_vm_area+0x41/0x6b
     [<ffffffff81123a97>] __vmalloc_area_node+0x12f/0x167
     [<ffffffff81123b5f>] __vmalloc_node+0x90/0xb5
     [<ffffffff81123dd6>] __vmalloc+0x28/0x3e
     [<ffffffff81a2db10>] alloc_large_system_hash+0x12c/0x1f8
     [<ffffffff81a30190>] vfs_caches_init+0xb8/0x140
     [<ffffffff81a08629>] start_kernel+0x3ef/0x44c
     [<ffffffff81a07930>] x86_64_start_reservations+0xbb/0xd6
     [<ffffffff81a0be20>] xen_start_kernel+0x5dc/0x5e3
     [<ffffffffffffffff>] 0xffffffffffffffff
 }
 ... key at: [<ffffffff817b59c8>] purge_lock.26578+0x18/0x40
 -> (vmap_area_lock){+.+...} ops: 0 {
    HARDIRQ-ON-W at:
      [<ffffffff81099a91>] __lock_acquire+0x2c8/0xc0e
      [<ffffffff8109a4c5>] lock_acquire+0xee/0x12e
      [<ffffffff815138f7>] _spin_lock+0x45/0x8e
      [<ffffffff81122cee>] alloc_vmap_area+0x116/0x28a
      [<ffffffff81122fac>] __get_vm_area_node+0x14a/0x20a
      [<ffffffff81123b46>] __vmalloc_node+0x77/0xb5
      [<ffffffff81123dd6>] __vmalloc+0x28/0x3e
      [<ffffffff81a2db10>] alloc_large_system_hash+0x12c/0x1f8
      [<ffffffff81a30190>] vfs_caches_init+0xb8/0x140
      [<ffffffff81a08629>] start_kernel+0x3ef/0x44c
      [<ffffffff81a07930>] x86_64_start_reservations+0xbb/0xd6
      [<ffffffff81a0be20>] xen_start_kernel+0x5dc/0x5e3
      [<ffffffffffffffff>] 0xffffffffffffffff
    SOFTIRQ-ON-W at:
      [<ffffffff81099ab2>] __lock_acquire+0x2e9/0xc0e
      [<ffffffff8109a4c5>] lock_acquire+0xee/0x12e
      [<ffffffff815138f7>] _spin_lock+0x45/0x8e
      [<ffffffff81122cee>] alloc_vmap_area+0x116/0x28a
      [<ffffffff81122fac>] __get_vm_area_node+0x14a/0x20a
      [<ffffffff81123b46>] __vmalloc_node+0x77/0xb5
      [<ffffffff81123dd6>] __vmalloc+0x28/0x3e
      [<ffffffff81a2db10>] alloc_large_system_hash+0x12c/0x1f8
      [<ffffffff81a30190>] vfs_caches_init+0xb8/0x140
      [<ffffffff81a08629>] start_kernel+0x3ef/0x44c
      [<ffffffff81a07930>] x86_64_start_reservations+0xbb/0xd6
      [<ffffffff81a0be20>] xen_start_kernel+0x5dc/0x5e3
      [<ffffffffffffffff>] 0xffffffffffffffff
    INITIAL USE at:
      [<ffffffff81099b08>] __lock_acquire+0x33f/0xc0e
      [<ffffffff8109a4c5>] lock_acquire+0xee/0x12e
      [<ffffffff815138f7>] _spin_lock+0x45/0x8e
      [<ffffffff81122cee>] alloc_vmap_area+0x116/0x28a
      [<ffffffff81122fac>] __get_vm_area_node+0x14a/0x20a
      [<ffffffff81123b46>] __vmalloc_node+0x77/0xb5
      [<ffffffff81123dd6>] __vmalloc+0x28/0x3e
      [<ffffffff81a2db10>] alloc_large_system_hash+0x12c/0x1f8
      [<ffffffff81a30190>] vfs_caches_init+0xb8/0x140
      [<ffffffff81a08629>] start_kernel+0x3ef/0x44c
      [<ffffffff81a07930>] x86_64_start_reservations+0xbb/0xd6
      [<ffffffff81a0be20>] xen_start_kernel+0x5dc/0x5e3
      [<ffffffffffffffff>] 0xffffffffffffffff
  }
  ... key at: [<ffffffff817b5978>] vmap_area_lock+0x18/0x40
  -> (&rnp->lock){..-...} ops: 0 {
     IN-SOFTIRQ-W at:
       [<ffffffff81099a3e>] __lock_acquire+0x275/0xc0e
       [<ffffffff8109a4c5>] lock_acquire+0xee/0x12e
       [<ffffffff81513aca>] _spin_lock_irqsave+0x5d/0xab
       [<ffffffff810ccbe5>] cpu_quiet+0x38/0xb0
       [<ffffffff810cd385>] __rcu_process_callbacks+0x83/0x259
       [<ffffffff810cd5b8>] rcu_process_callbacks+0x5d/0x76
       [<ffffffff8106ec56>] __do_softirq+0xf6/0x1f0
       [<ffffffff810162ac>] call_softirq+0x1c/0x30
       [<ffffffff81017d5b>] do_softirq+0x5f/0xd7
       [<ffffffff8106e56d>] irq_exit+0x66/0xbc
       [<ffffffff8130a9aa>] xen_evtchn_do_upcall+0x18d/0x1bd
       [<ffffffff810162fe>] xen_do_hypervisor_callback+0x1e/0x30
       [<ffffffffffffffff>] 0xffffffffffffffff
     INITIAL USE at:
       [<ffffffff81099b08>] __lock_acquire+0x33f/0xc0e
       [<ffffffff8109a4c5>] lock_acquire+0xee/0x12e
       [<ffffffff81513aca>] _spin_lock_irqsave+0x5d/0xab
       [<ffffffff8150e73b>] rcu_init_percpu_data+0x3d/0x18b
       [<ffffffff8150e8d3>] rcu_cpu_notify+0x4a/0xa7
       [<ffffffff81a2a055>] __rcu_init+0x168/0x1b3
       [<ffffffff81a26e52>] rcu_init+0x1c/0x3e
       [<ffffffff81a084a2>] start_kernel+0x268/0x44c
       [<ffffffff81a07930>] x86_64_start_reservations+0xbb/0xd6
       [<ffffffff81a0be20>] xen_start_kernel+0x5dc/0x5e3
       [<ffffffffffffffff>] 0xffffffffffffffff
   }
   ... key at: [<ffffffff8248c080>] __key.20517+0x0/0x8
   ...
  acquired at:
   [<ffffffff8109a242>] __lock_acquire+0xa79/0xc0e
   [<ffffffff8109a4c5>] lock_acquire+0xee/0x12e
   [<ffffffff81513aca>] _spin_lock_irqsave+0x5d/0xab
   [<ffffffff810cd124>] __call_rcu+0x9d/0x13c
   [<ffffffff810cd229>] call_rcu+0x28/0x3e
   [<ffffffff81122934>] __free_vmap_area+0x7b/0x98
   [<ffffffff81122a9a>] __purge_vmap_area_lazy+0x149/0x198
   [<ffffffff81124280>] vm_unmap_aliases+0x18f/0x1b2
   [<ffffffff8100da26>] xen_create_contiguous_region+0x4a/0xf3
   [<ffffffff8101208c>] xen_swiotlb_alloc_coherent+0x84/0xef
   [<ffffffff813660dc>] dma_alloc_coherent+0xaf/0xec
   [<ffffffff8136623f>] dmam_alloc_coherent+0x67/0xc0
   [<ffffffff813a357f>] ahci_port_start+0x63/0xf6
   [<ffffffff8138ce24>] ata_host_start+0xf4/0x1a9
   [<ffffffff8138d8ec>] ata_host_activate+0x39/0x102
   [<ffffffff813a483e>] ahci_init_one+0xca3/0xcd7
   [<ffffffff8129601f>] local_pci_probe+0x2a/0x42
   [<ffffffff8107e88f>] do_work_for_cpu+0x27/0x50
   [<ffffffff81083bcc>] kthread+0xac/0xb4
   [<ffffffff810161aa>] child_rip+0xa/0x20
   [<ffffffffffffffff>] 0xffffffffffffffff
  -> (&rcu_state.onofflock){..-...} ops: 0 {
     IN-SOFTIRQ-W at:
       [<ffffffff81099a3e>] __lock_acquire+0x275/0xc0e
       [<ffffffff8109a4c5>] lock_acquire+0xee/0x12e
       [<ffffffff815138f7>] _spin_lock+0x45/0x8e
       [<ffffffff810cca51>] rcu_start_gp+0xc5/0x154
       [<ffffffff810ccb8e>] cpu_quiet_msk+0xae/0xcd
       [<ffffffff810ccea8>] rcu_process_dyntick+0xe0/0x11d
       [<ffffffff810cd006>] force_quiescent_state+0x121/0x1a2
       [<ffffffff810cd34a>] __rcu_process_callbacks+0x48/0x259
       [<ffffffff810cd599>] rcu_process_callbacks+0x3e/0x76
       [<ffffffff8106ec56>] __do_softirq+0xf6/0x1f0
       [<ffffffff810162ac>] call_softirq+0x1c/0x30
       [<ffffffff81017d5b>] do_softirq+0x5f/0xd7
       [<ffffffff8106e56d>] irq_exit+0x66/0xbc
       [<ffffffff8130a9aa>] xen_evtchn_do_upcall+0x18d/0x1bd
       [<ffffffff810162fe>] xen_do_hypervisor_callback+0x1e/0x30
       [<ffffffffffffffff>] 0xffffffffffffffff
     INITIAL USE at:
       [<ffffffff81099b08>] __lock_acquire+0x33f/0xc0e
       [<ffffffff8109a4c5>] lock_acquire+0xee/0x12e
       [<ffffffff815138f7>] _spin_lock+0x45/0x8e
       [<ffffffff8150e7e4>] rcu_init_percpu_data+0xe6/0x18b
       [<ffffffff8150e8d3>] rcu_cpu_notify+0x4a/0xa7
       [<ffffffff81a2a055>] __rcu_init+0x168/0x1b3
       [<ffffffff81a26e52>] rcu_init+0x1c/0x3e
       [<ffffffff81a084a2>] start_kernel+0x268/0x44c
       [<ffffffff81a07930>] x86_64_start_reservations+0xbb/0xd6
       [<ffffffff81a0be20>] xen_start_kernel+0x5dc/0x5e3
       [<ffffffffffffffff>] 0xffffffffffffffff
   }
   ... key at: [<ffffffff817b0130>] rcu_state+0x14f0/0x1580
   ... acquired at:
    [<ffffffff8109a242>] __lock_acquire+0xa79/0xc0e
    [<ffffffff8109a4c5>] lock_acquire+0xee/0x12e
    [<ffffffff815138f7>] _spin_lock+0x45/0x8e
    [<ffffffff8150e7f8>] rcu_init_percpu_data+0xfa/0x18b
    [<ffffffff8150e8d3>] rcu_cpu_notify+0x4a/0xa7
    [<ffffffff81a2a055>] __rcu_init+0x168/0x1b3
    [<ffffffff81a26e52>] rcu_init+0x1c/0x3e
    [<ffffffff81a084a2>] start_kernel+0x268/0x44c
    [<ffffffff81a07930>] x86_64_start_reservations+0xbb/0xd6
    [<ffffffff81a0be20>] xen_start_kernel+0x5dc/0x5e3
    [<ffffffffffffffff>] 0xffffffffffffffff
  ... acquired at:
    [<ffffffff8109a242>] __lock_acquire+0xa79/0xc0e
    [<ffffffff8109a4c5>] lock_acquire+0xee/0x12e
    [<ffffffff815138f7>] _spin_lock+0x45/0x8e
    [<ffffffff810cca51>] rcu_start_gp+0xc5/0x154
    [<ffffffff810cd12f>] __call_rcu+0xa8/0x13c
    [<ffffffff810cd229>] call_rcu+0x28/0x3e
    [<ffffffff81122934>] __free_vmap_area+0x7b/0x98
    [<ffffffff81122a9a>] __purge_vmap_area_lazy+0x149/0x198
    [<ffffffff81124280>] vm_unmap_aliases+0x18f/0x1b2
    [<ffffffff8100da26>] xen_create_contiguous_region+0x4a/0xf3
    [<ffffffff8101208c>] xen_swiotlb_alloc_coherent+0x84/0xef
    [<ffffffff813660dc>] dma_alloc_coherent+0xaf/0xec
    [<ffffffff8136623f>] dmam_alloc_coherent+0x67/0xc0
    [<ffffffff813a357f>] ahci_port_start+0x63/0xf6
    [<ffffffff8138ce24>] ata_host_start+0xf4/0x1a9
    [<ffffffff8138d8ec>] ata_host_activate+0x39/0x102
    [<ffffffff813a483e>] ahci_init_one+0xca3/0xcd7
    [<ffffffff8129601f>] local_pci_probe+0x2a/0x42
    [<ffffffff8107e88f>] do_work_for_cpu+0x27/0x50
    [<ffffffff81083bcc>] kthread+0xac/0xb4
    [<ffffffff810161aa>] child_rip+0xa/0x20
    [<ffffffffffffffff>] 0xffffffffffffffff
  ...
  acquired at:
   [<ffffffff8109a242>] __lock_acquire+0xa79/0xc0e
   [<ffffffff8109a4c5>] lock_acquire+0xee/0x12e
   [<ffffffff815138f7>] _spin_lock+0x45/0x8e
   [<ffffffff81122a83>] __purge_vmap_area_lazy+0x132/0x198
   [<ffffffff81124280>] vm_unmap_aliases+0x18f/0x1b2
   [<ffffffff8100da26>] xen_create_contiguous_region+0x4a/0xf3
   [<ffffffff8101208c>] xen_swiotlb_alloc_coherent+0x84/0xef
   [<ffffffff813660dc>] dma_alloc_coherent+0xaf/0xec
   [<ffffffff8136623f>] dmam_alloc_coherent+0x67/0xc0
   [<ffffffff813a357f>] ahci_port_start+0x63/0xf6
   [<ffffffff8138ce24>] ata_host_start+0xf4/0x1a9
   [<ffffffff8138d8ec>] ata_host_activate+0x39/0x102
   [<ffffffff813a483e>] ahci_init_one+0xca3/0xcd7
   [<ffffffff8129601f>] local_pci_probe+0x2a/0x42
   [<ffffffff8107e88f>] do_work_for_cpu+0x27/0x50
   [<ffffffff81083bcc>] kthread+0xac/0xb4
   [<ffffffff810161aa>] child_rip+0xa/0x20
   [<ffffffffffffffff>] 0xffffffffffffffff

stack backtrace:
Pid: 42, comm: khubd Not tainted 2.6.31.6 #8
Call Trace:
 [<ffffffff810996bb>] check_usage+0x29a/0x2bf
 [<ffffffff8109933b>] ? check_noncircular+0xa1/0xe8
 [<ffffffff81099750>] check_irq_usage+0x70/0xe9
 [<ffffffff8109a13c>] __lock_acquire+0x973/0xc0e
 [<ffffffff8109a4c5>] lock_acquire+0xee/0x12e
 [<ffffffff811298c4>] ? dma_pool_alloc+0x45/0x321
 [<ffffffff811298c4>] ? dma_pool_alloc+0x45/0x321
 [<ffffffff81513aca>] _spin_lock_irqsave+0x5d/0xab
 [<ffffffff811298c4>] ? dma_pool_alloc+0x45/0x321
 [<ffffffff81097bc7>] ? add_lock_to_list+0x8a/0xe5
 [<ffffffff811298c4>] dma_pool_alloc+0x45/0x321
 [<ffffffff8100eb68>] ? xen_force_evtchn_callback+0x20/0x36
 [<ffffffff8100f632>] ? check_events+0x12/0x20
 [<ffffffff813d959e>] ehci_qh_alloc+0x37/0xfe
 [<ffffffff8100f61f>] ? xen_restore_fl_direct_end+0x0/0x1
 [<ffffffff813db182>] qh_append_tds+0x4c/0x478
 [<ffffffff815136ce>] ? _spin_unlock+0x3a/0x55
 [<ffffffff813db6a0>] ehci_urb_enqueue+0xf2/0xd7a
 [<ffffffff8100f632>] ? check_events+0x12/0x20
 [<ffffffff813c0600>] ? usb_create_hcd+0x1a4/0x1ba
 [<ffffffff813c0f76>] usb_hcd_submit_urb+0x86d/0xa03
 [<ffffffff8100f61f>] ? xen_restore_fl_direct_end+0x0/0x1
 [<ffffffff8100eb68>] ? xen_force_evtchn_callback+0x20/0x36
 [<ffffffff81098d57>] ? debug_check_no_locks_freed+0x13d/0x16a
 [<ffffffff8100f632>] ? check_events+0x12/0x20
 [<ffffffff8100f61f>] ? xen_restore_fl_direct_end+0x0/0x1
 [<ffffffff810979f8>] ? lockdep_init_map+0xad/0x138
 [<ffffffff813c17e1>] usb_submit_urb+0x258/0x2eb
 [<ffffffff810844c3>] ? __init_waitqueue_head+0x4d/0x76
 [<ffffffff813c3145>] usb_start_wait_urb+0x71/0x1d4
 [<ffffffff813c3543>] usb_control_msg+0x10c/0x144
 [<ffffffff813ba1ac>] hub_port_init+0x319/0x7ec
 [<ffffffff813bd7f9>] hub_events+0x94d/0x11e6
 [<ffffffff8100f632>] ? check_events+0x12/0x20
 [<ffffffff8100eb68>] ? xen_force_evtchn_callback+0x20/0x36
 [<ffffffff813be0d8>] hub_thread+0x46/0x1d2
 [<ffffffff8108401f>] ? autoremove_wake_function+0x0/0x5f
 [<ffffffff813be092>] ? hub_thread+0x0/0x1d2
 [<ffffffff81083bcc>] kthread+0xac/0xb4
 [<ffffffff810161aa>] child_rip+0xa/0x20
 [<ffffffff81015b10>] ? restore_args+0x0/0x30
 [<ffffffff810161a0>] ? child_rip+0x0/0x20

usb 1-5: New USB device found, idVendor=14dd, idProduct=0002
usb 1-5: New USB device strings: Mfr=1, Product=2, SerialNumber=3
usb 1-5: Product: Multidevice
usb 1-5: Manufacturer: Peppercon AG
usb 1-5: SerialNumber: 0962b201b34649022DFD5493FE193F22
usb 1-5: configuration #1 chosen from 1 choice
input: Peppercon AG Multidevice as /devices/pci0000:00/0000:00:1a.7/usb1/1-5/1-5:1.2/input/input3
generic-usb 0003:14DD:0002.0001: input,hidraw0: USB HID v1.01 Mouse [Peppercon AG Multidevice] on usb-0000:00:1a.7-5/input2
input: Peppercon AG Multidevice as /devices/pci0000:00/0000:00:1a.7/usb1/1-5/1-5:1.3/input/input4
generic-usb 0003:14DD:0002.0002: input,hidraw1: e] on usb-0000:00:1a.7-5/input3

-- Pasi

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel