
Re: [Xen-devel] Xen4.2-rc3 test result



> -----Original Message-----
> From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@xxxxxxxxxx]
> Sent: Friday, September 07, 2012 9:56 PM
> To: Ren, Yongjie
> Cc: Konrad Rzeszutek Wilk; 'xen-devel'; 'Keir Fraser'; 'Ian Campbell'; 'Jan
> Beulich'
> Subject: Re: [Xen-devel] Xen4.2-rc3 test result
> 
> On Fri, Sep 07, 2012 at 08:01:37AM +0000, Ren, Yongjie wrote:
> > > -----Original Message-----
> > > From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@xxxxxxxxxx]
> > > Sent: Thursday, September 06, 2012 7:12 PM
> > > To: Ren, Yongjie
> > > Cc: Konrad Rzeszutek Wilk; 'xen-devel'; 'Keir Fraser'; 'Ian Campbell'; 'Jan
> > > Beulich'
> > > Subject: Re: [Xen-devel] Xen4.2-rc3 test result
> > >
> > > On Thu, Sep 06, 2012 at 08:18:16AM +0000, Ren, Yongjie wrote:
> > > > > -----Original Message-----
> > > > > From: Konrad Rzeszutek [mailto:ketuzsezr@xxxxxxxxx] On Behalf Of
> > > > > Konrad Rzeszutek Wilk
> > > > > Sent: Saturday, September 01, 2012 1:24 AM
> > > > > To: Ren, Yongjie
> > > > > Cc: 'xen-devel'; 'Keir Fraser'; 'Ian Campbell'; 'Jan Beulich'; 'Konrad
> > > > > Rzeszutek Wilk'
> > > > > Subject: Re: [Xen-devel] Xen4.2-rc3 test result
> > > >
> > > > > > 5. Dom0 cannot be shutdown before PCI detachment from guest and
> > > > > >    when pci assignment conflicts
> > > > > >   http://bugzilla.xen.org/bugzilla/show_bug.cgi?id=1826
> > > > >
> > > > > Um, so you are assigning the same VF to two guests. I am surprised that
> > > > > the tools even allowed you to do that. Was 'xm' allowing you to do that?
> > > > >
> > > > No, 'xl' doesn't allow me to do that. We can't assign a device to
> > > > different guests.
> > > > Sorry, the description of this bug is not accurate. I changed its title to
> > > > "Dom0 cannot be shut down before PCI device detachment from a guest".
> > > > If a guest (with a PCI device assigned) is running, Dom0 will panic when
> > > > shutting down.
> > >
> > > And does it panic if you use the 'irqpoll' option it asks for?
> > >
> > Adding 'irqpoll' makes no difference.
> >
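For reference, 'irqpoll' is a dom0 kernel parameter, so it goes on the dom0
kernel "module" line of the Xen entry in grub.cfg rather than on the xen.gz
line. The paths and other options below are only an illustration, not an exact
copy of the boot entry used here:

    multiboot /boot/xen.gz dom0_mem=4096M
    module    /boot/vmlinuz-3.4.4 root=/dev/sda2 ro console=hvc0 irqpoll
    module    /boot/initrd-3.4.4.img
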
> > This looks like a regression in Xen from 4.1 to 4.2.
> > I didn't hit this issue with Xen 4.1 and a 3.5.3 Dom0 kernel.
> 
> Aaah. That was not clear from the bugzilla.
> 
> >
> > > Xen-pciback has no involvement, as:
> > >
> > >
> > > [  283.747488] xenbus_dev_shutdown: backend/pci/1/0: Initialised != Connected, skipping
> > >
> > > and the guest still keeps on getting interrupts.
> > >
> > > What is the stack at the hang?
> > >
> > [  283.747488] xenbus_dev_shutdown: backend/pci/1/0: Initialised != Connected, skipping
> > [  283.747505] xenbus_dev_shutdown: backend/vkbd/2/0: Initialising != Connected, skipping
> > [  283.747515] xenbus_dev_shutdown: backend/console/2/0: Initialising != Connected, skipping
> > [  380.236571] irq 16: nobody cared (try booting with the "irqpoll" option)
> > [  380.236588] Pid: 0, comm: swapper/0 Not tainted 3.4.4 #1
> > [  380.236596] Call Trace:
> > [  380.236601]  <IRQ>  [<ffffffff8110b538>] __report_bad_irq+0x38/0xd0
> > [  380.236626]  [<ffffffff8110b72c>] note_interrupt+0x15c/0x210
> > [  380.236637]  [<ffffffff81108ffa>] handle_irq_event_percpu+0xca/0x230
> > [  380.236648]  [<ffffffff811091b6>] handle_irq_event+0x56/0x90
> > [  380.236658]  [<ffffffff8110bde3>] handle_fasteoi_irq+0x63/0x120
> > [  380.236671]  [<ffffffff8131c8f1>] __xen_evtchn_do_upcall+0x1b1/0x280
> > [  380.236703]  [<ffffffff8131d51a>] xen_evtchn_do_upcall+0x2a/0x40
> > [  380.236716]  [<ffffffff816bc36e>] xen_do_hypervisor_callback+0x1e/0x30
> > [  380.236723]  <EOI>  [<ffffffff810013aa>] ? hypercall_page+0x3aa/0x1000
> > [  380.236742]  [<ffffffff810013aa>] ? hypercall_page+0x3aa/0x1000
> > [  380.236754]  [<ffffffff810538d0>] ? xen_safe_halt+0x10/0x20
> > [  380.236768]  [<ffffffff81067c9a>] ? default_idle+0x6a/0x1d0
> > [  380.236778]  [<ffffffff81067256>] ? cpu_idle+0x96/0xf0
> > [  380.236789]  [<ffffffff81689a18>] ? rest_init+0x68/0x70
> > [  380.236800]  [<ffffffff81ca9e33>] ? start_kernel+0x407/0x414
> > [  380.236810]  [<ffffffff81ca984a>] ? kernel_init+0x1e1/0x1e1
> > [  380.236821]  [<ffffffff81ca9346>] ? x86_64_start_reservations+0x131/0x136
> > [  380.236833]  [<ffffffff81cade7a>] ? xen_start_kernel+0x621/0x628
> > [  380.236841] handlers:
> > [  380.236850] [<ffffffff81428d80>] usb_hcd_irq
> > [  380.236860] Disabling IRQ #16
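
For what it's worth, the only registered handler on IRQ 16 in that log is
usb_hcd_irq; the passed-through device presumably shares that line. Generic ways
to see what sits on IRQ 16 (just illustrative commands, no output from this box
is shown here):

    grep ' 16:' /proc/interrupts
    lspci -vv | grep -B8 'IRQ 16'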
> 
> That is not the hang stack. That is the kernel telling you that something
> has gone astray with an interrupt. But it's unclear what happened _after_
> that.
> 
> It might be that the system called the proper shutdown hypercall and it's
> waiting for the hypervisor to do its stuff. Can you try using the 'q' key to
> get a stack dump of the dom0 and see where it's spinning/sitting, please?
> 
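The dump below was taken over the serial console (type 'CTRL-a' three times to
switch input to Xen, then press 'q'). The same information can also be pulled
from a dom0 shell, since the output lands in the Xen console ring:

    xl debug-keys q
    xl dmesg
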
(XEN) 'q' pressed -> dumping domain info (now=0x43F:E9B06F08)
(XEN) General information for domain 0:
(XEN)     refcnt=3 dying=0 pause_count=0
(XEN)     nr_pages=1048576 xenheap_pages=6 shared_pages=0 paged_pages=0 
dirty_cpus={1-24,26-30} max_pages=4294967295
(XEN)     handle=00000000-0000-0000-0000-000000000000 vm_assist=0000000d
(XEN) Rangesets belonging to domain 0:
(XEN)     I/O Ports  { 0-1f, 22-3f, 44-60, 62-9f, a2-3f7, 400-407, 40c-cfb, 
d00-ffff }
(XEN)     Interrupts { 0-315, 317-327 }
(XEN)     I/O Memory { 0-febff, fec01-fec3e, fec40-fec7e, fec80-fedff, 
fee01-ffffffffffffffff }
(XEN) Memory pages belonging to domain 0:
(XEN)     DomPage list too long to display
(XEN)     XenPage 000000000043c8ad: caf=c000000000000002, taf=7400000000000002
(XEN)     XenPage 000000000043c8ac: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000043c8ab: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000043c8aa: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000000bb48e: caf=c000000000000002, taf=7400000000000002
(XEN)     XenPage 0000000000418d84: caf=c000000000000002, taf=7400000000000002
(XEN) VCPU information and callbacks for domain 0:
(XEN)     VCPU0: CPU14 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={14} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU1: CPU18 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={18} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU2: CPU15 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={15} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU3: CPU24 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={24} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU4: CPU27 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={27} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU5: CPU30 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={30} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU6: CPU24 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU7: CPU11 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={11} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU8: CPU19 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={19} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU9: CPU23 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU10: CPU28 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={28} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU11: CPU29 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={29} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU12: CPU10 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={10} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU13: CPU17 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={17} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU14: CPU8 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={8} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU15: CPU22 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={22} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU16: CPU7 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={7} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU17: CPU16 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={16} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU18: CPU26 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={26} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU19: CPU13 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={13} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU20: CPU3 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={3} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU21: CPU12 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={12} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU22: CPU0 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU23: CPU23 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={23} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU24: CPU20 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={20} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU25: CPU2 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={2} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU26: CPU1 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={1} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU27: CPU9 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={9} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU28: CPU5 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={5} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU29: CPU4 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={4} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU30: CPU21 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={21} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU31: CPU6 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={6} cpu_affinity={0-127}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN) General information for domain 1:
(XEN)     refcnt=3 dying=0 pause_count=0
(XEN)     nr_pages=1047549 xenheap_pages=6 shared_pages=0 paged_pages=0 
dirty_cpus={} max_pages=1048832
(XEN)     handle=dd1ee134-05ef-4a65-a2fe-163bce610687 vm_assist=00000000
(XEN)     paging assistance: hap refcounts translate external 
(XEN) Rangesets belonging to domain 1:
(XEN)     I/O Ports  { }
(XEN)     Interrupts { 101-103 }
(XEN)     I/O Memory { ebb41-ebb43, ebb60-ebb63 }
(XEN) Memory pages belonging to domain 1:
(XEN)     DomPage list too long to display
(XEN)     PoD entries=0 cachesize=0
(XEN)     XenPage 000000000044000e: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000044000d: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000044000c: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000004a04f7: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000000bd4f8: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000004a0400: caf=c000000000000001, taf=7400000000000001
(XEN) VCPU information and callbacks for domain 1:
(XEN)     VCPU0: CPU1 [has=F] poll=0 upcall_pend = 01, upcall_mask = 00 
dirty_cpus={} cpu_affinity={0-15}
(XEN)     pause_count=0 pause_flags=4
(XEN)     paging assistance: hap, 4 levels
(XEN)     No periodic timer
(XEN)     VCPU1: CPU10 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={} cpu_affinity={0-15}
(XEN)     pause_count=0 pause_flags=2
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN)     VCPU2: CPU11 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={} cpu_affinity={0-15}
(XEN)     pause_count=0 pause_flags=2
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN)     VCPU3: CPU12 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={} cpu_affinity={0-15}
(XEN)     pause_count=0 pause_flags=2
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN)     VCPU4: CPU13 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={} cpu_affinity={0-15}
(XEN)     pause_count=0 pause_flags=2
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN)     VCPU5: CPU14 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={} cpu_affinity={0-15}
(XEN)     pause_count=0 pause_flags=2
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN)     VCPU6: CPU15 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={} cpu_affinity={0-15}
(XEN)     pause_count=0 pause_flags=2
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN)     VCPU7: CPU0 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={} cpu_affinity={0-15}
(XEN)     pause_count=0 pause_flags=2
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN)     VCPU8: CPU0 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={} cpu_affinity={0-15}
(XEN)     pause_count=0 pause_flags=2
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN)     VCPU9: CPU0 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={} cpu_affinity={0-15}
(XEN)     pause_count=0 pause_flags=2
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN)     VCPU10: CPU0 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={} cpu_affinity={0-15}
(XEN)     pause_count=0 pause_flags=2
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN)     VCPU11: CPU0 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={} cpu_affinity={0-15}
(XEN)     pause_count=0 pause_flags=2
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN)     VCPU12: CPU0 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={} cpu_affinity={0-15}
(XEN)     pause_count=0 pause_flags=2
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN)     VCPU13: CPU0 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={} cpu_affinity={0-15}
(XEN)     pause_count=0 pause_flags=2
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN)     VCPU14: CPU0 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={} cpu_affinity={0-15}
(XEN)     pause_count=0 pause_flags=2
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN)     VCPU15: CPU0 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={} cpu_affinity={0-15}
(XEN)     pause_count=0 pause_flags=2
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN) General information for domain 2:
(XEN)     refcnt=3 dying=0 pause_count=0
(XEN)     nr_pages=523261 xenheap_pages=6 shared_pages=0 paged_pages=0 
dirty_cpus={31} max_pages=524544
(XEN)     handle=b10a1595-a5b0-46e2-a67a-3c77c4755761 vm_assist=00000000
(XEN)     paging assistance: hap refcounts translate external 
(XEN) Rangesets belonging to domain 2:
(XEN)     I/O Ports  { }
(XEN)     Interrupts { }
(XEN)     I/O Memory { }
(XEN) Memory pages belonging to domain 2:
(XEN)     DomPage list too long to display
(XEN)     PoD entries=0 cachesize=0
(XEN)     XenPage 00000000004b44f3: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000004b44f2: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000004b44f1: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000004b44f0: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000000bb08f: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000004b27a7: caf=c000000000000001, taf=7400000000000001
(XEN) VCPU information and callbacks for domain 2:
(XEN)     VCPU0: CPU31 [has=T] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={31} cpu_affinity={16-31}
(XEN)     pause_count=0 pause_flags=0
(XEN)     paging assistance: hap, 4 levels
(XEN)     No periodic timer
(XEN)     VCPU1: CPU16 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={} cpu_affinity={16-31}
(XEN)     pause_count=0 pause_flags=4
(XEN)     paging assistance: hap, 4 levels
(XEN)     No periodic timer
(XEN)     VCPU2: CPU18 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={} cpu_affinity={16-31}
(XEN)     pause_count=0 pause_flags=2
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN)     VCPU3: CPU19 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={} cpu_affinity={16-31}
(XEN)     pause_count=0 pause_flags=2
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN)     VCPU4: CPU18 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={} cpu_affinity={16-31}
(XEN)     pause_count=0 pause_flags=2
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN)     VCPU5: CPU19 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={} cpu_affinity={16-31}
(XEN)     pause_count=0 pause_flags=2
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN)     VCPU6: CPU18 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={} cpu_affinity={16-31}
(XEN)     pause_count=0 pause_flags=2
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN)     VCPU7: CPU19 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={} cpu_affinity={16-31}
(XEN)     pause_count=0 pause_flags=2
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN)     VCPU8: CPU18 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={} cpu_affinity={16-31}
(XEN)     pause_count=0 pause_flags=2
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN)     VCPU9: CPU16 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={} cpu_affinity={16-31}
(XEN)     pause_count=0 pause_flags=2
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN)     VCPU10: CPU17 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={} cpu_affinity={16-31}
(XEN)     pause_count=0 pause_flags=2
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN)     VCPU11: CPU18 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={} cpu_affinity={16-31}
(XEN)     pause_count=0 pause_flags=2
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN)     VCPU12: CPU19 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={} cpu_affinity={16-31}
(XEN)     pause_count=0 pause_flags=2
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN)     VCPU13: CPU20 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={} cpu_affinity={16-31}
(XEN)     pause_count=0 pause_flags=2
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN)     VCPU14: CPU21 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={} cpu_affinity={16-31}
(XEN)     pause_count=0 pause_flags=2
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN)     VCPU15: CPU22 [has=F] poll=0 upcall_pend = 00, upcall_mask = 00 
dirty_cpus={} cpu_affinity={16-31}
(XEN)     pause_count=0 pause_flags=2
(XEN)     paging assistance: hap, 1 levels
(XEN)     No periodic timer
(XEN) Notifying guest 0:0 (virq 1, port 5, stat 0/0/0)
(XEN) Notifying guest 0:1 (virq 1, port 12, stat 0/0/0)
(XEN) Notifying guest 0:2 (virq 1, port 19, stat 0/0/0)
(XEN) Notifying guest 0:3 (virq 1, port 26, stat 0/0/0)
(XEN) Notifying guest 0:4 (virq 1, port 33, stat 0/0/0)
(XEN) Notifying guest 0:5 (virq 1, port 40, stat 0/0/0)
(XEN) Notifying guest 0:6 (virq 1, port 47, stat 0/0/0)
(XEN) Notifying guest 0:7 (virq 1, port 54, stat 0/0/0)
(XEN) Notifying guest 0:8 (virq 1, port 61, stat 0/0/0)
(XEN) Notifying guest 0:9 (virq 1, port 68, stat 0/0/0)
(XEN) Notifying guest 0:10 (virq 1, port 75, stat 0/0/0)
(XEN) Notifying guest 0:11 (virq 1, port 82, stat 0/0/0)
(XEN) Notifying guest 0:12 (virq 1, port 89, stat 0/0/0)
(XEN) Notifying guest 0:13 (virq 1, port 96, stat 0/0/0)
(XEN) Notifying guest 0:14 (virq 1, port 103, stat 0/0/0)
(XEN) Notifying guest 0:15 (virq 1, port 110, stat 0/0/0)
(XEN) Notifying guest 0:16 (virq 1, port 117, stat 0/0/0)
(XEN) Notifying guest 0:17 (virq 1, port 124, stat 0/0/0)
(XEN) Notifying guest 0:18 (virq 1, port 131, stat 0/0/0)
(XEN) Notifying guest 0:19 (virq 1, port 138, stat 0/0/0)
(XEN) Notifying guest 0:20 (virq 1, port 145, stat 0/0/0)
(XEN) Notifying guest 0:21 (virq 1, port 152, stat 0/0/0)
(XEN) Notifying guest 0:22 (virq 1, port 159, stat 0/0/0)
(XEN) Notifying guest 0:23 (virq 1, port 166, stat 0/0/0)
(XEN) Notifying guest 0:24 (virq 1, port 173, stat 0/0/0)
(XEN) Notifying guest 0:25 (virq 1, port 180, stat 0/0/0)
(XEN) Notifying guest 0:26 (virq 1, port 187, stat 0/0/0)
(XEN) Notifying guest 0:27 (virq 1, port 194, stat 0/0/0)
(XEN) Notifying guest 0:28 (virq 1, port 201, stat 0/0/0)
(XEN) Notifying guest 0:29 (virq 1, port 208, stat 0/0/0)
(XEN) Notifying guest 0:30 (virq 1, port 215, stat 0/0/0)
(XEN) Notifying guest 0:31 (virq 1, port 222, stat 0/0/0)
(XEN) Notifying guest 1:0 (virq 1, port 0, stat 0/-1/-1)
(XEN) Notifying guest 1:1 (virq 1, port 0, stat 0/-1/0)
(XEN) Notifying guest 1:2 (virq 1, port 0, stat 0/-1/0)
(XEN) Notifying guest 1:3 (virq 1, port 0, stat 0/-1/0)
(XEN) Notifying guest 1:4 (virq 1, port 0, stat 0/-1/0)
(XEN) Notifying guest 1:5 (virq 1, port 0, stat 0/-1/0)
(XEN) Notifying guest 1:6 (virq 1, port 0, stat 0/-1/0)
(XEN) Notifying guest 1:7 (virq 1, port 0, stat 0/-1/0)
(XEN) Notifying guest 1:8 (virq 1, port 0, stat 0/-1/0)
(XEN) Notifying guest 1:9 (virq 1, port 0, stat 0/-1/0)
(XEN) Notifying guest 1:10 (virq 1, port 0, stat 0/-1/0)
(XEN) Notifying guest 1:11 (virq 1, port 0, stat 0/-1/0)
(XEN) Notifying guest 1:12 (virq 1, port 0, stat 0/-1/0)
(XEN) Notifying guest 1:13 (virq 1, port 0, stat 0/-1/0)
(XEN) Notifying guest 1:14 (virq 1, port 0, stat 0/-1/0)
(XEN) Notifying guest 1:15 (virq 1, port 0, stat 0/-1/0)
(XEN) Notifying guest 2:0 (virq 1, port 0, stat 0/-1/0)
(XEN) Notifying guest 2:1 (virq 1, port 0, stat 0/-1/0)
(XEN) Notifying guest 2:2 (virq 1, port 0, stat 0/-1/0)
(XEN) Notifying guest 2:3 (virq 1, port 0, stat 0/-1/0)
(XEN) Notifying guest 2:4 (virq 1, port 0, stat 0/-1/0)
(XEN) Notifying guest 2:5 (virq 1, port 0, stat 0/-1/0)
(XEN) Notifying guest 2:6 (virq 1, port 0, stat 0/-1/0)
(XEN) Notifying guest 2:7 (virq 1, port 0, stat 0/-1/0)
(XEN) Notifying guest 2:8 (virq 1, port 0, stat 0/-1/0)
(XEN) Notifying guest 2:9 (virq 1, port 0, stat 0/-1/0)
(XEN) Notifying guest 2:10 (virq 1, port 0, stat 0/-1/0)
(XEN) Notifying guest 2:11 (virq 1, port 0, stat 0/-1/0)
(XEN) Notifying guest 2:12 (virq 1, port 0, stat 0/-1/0)
(XEN) Notifying guest 2:13 (virq 1, port 0, stat 0/-1/0)
(XEN) Notifying guest 2:14 (virq 1, port 0, stat 0/-1/0)
(XEN) Notifying guest 2:15 (virq 1, port 0, stat 0/-1/0)
(XEN) Shared frames 0 -- Saved frames 0
(XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)

[ 4672.954495] vcpu 0
  0: masked=0 pending=0 event_sel 0000000000000000
  1: masked=1 pending=0 event_sel 0000000000000000
  2: masked=1 pending=0 event_sel 0000000000000000
  3: masked=1 pending=0 event_sel 0000000000000000
  4: masked=1 pending=0 event_sel 0000000000000000
  5: masked=1 pending=0 event_sel 0000000000000000
  6: masked=1 pending=0 event_sel 0000000000000000
  7: masked=1 pending=0 event_sel 0000000000000000
  8: masked=1 pending=0 event_sel 0000000000000000
  9: masked=1 pending=1 event_sel 0000000000000002
  10: masked=1 pending=0 event_sel 0000000000000000
  11: masked=1 pending=0 event_sel 0000000000000000
  12: masked=1 pending=0 event_sel 0000000000000000
  13: masked=1 pending=0 event_sel 0000000000000000
  14: masked=1 pending=0 event_sel 0000000000000000
  15: masked=1 pending=
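
To spell out the scenario from the bug report, the sequence is just the normal
xl passthrough flow; the BDF and the config file name below are placeholders,
not the exact values used here:

    xl pci-assignable-add 0000:04:00.1
    # guest config contains:  pci = [ '0000:04:00.1' ]
    xl create hvm-guest.cfg
    # while the guest is still running with the device attached:
    shutdown -h now    (in dom0 -> dom0 panics instead of powering off)
    # detaching first with 'xl pci-detach <domid> 0000:04:00.1' avoids it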


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

