
Re: [Xen-devel] null domains after xl destroy



On 04/05/17 04:58, Steven Haigh wrote:
> On 04/05/17 01:53, Juergen Gross wrote:
>> On 03/05/17 12:45, Steven Haigh wrote:
>>> Just wanted to give this a little nudge now people seem to be back on
>>> deck...
>>
>> Glenn, could you please give the attached patch a try?
>>
>> It should be applied on top of the other correction; the old debug
>> patch should not be applied.
>>
>> I have added some debug output to make sure we see what is happening.
>
> This patch is included in kernel-xen-4.9.26-1
>
> It should be in the repos now.

Still seeing the same issue. Without the extra debug patch, all I see in the logs after destroy is this:

xen-blkback: xen_blkif_disconnect: busy
xen-blkback: xen_blkif_free: delayed = 0
br0: port 2(vif1.0) entered disabled state
br0: port 2(vif1.0) entered disabled state
device vif1.0 left promiscuous mode
br0: port 2(vif1.0) entered disabled state
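
For anyone following along without the source handy: the "busy" line is xen_blkif_disconnect() refusing to tear down the rings while requests are still in flight. A rough sketch of that path, paraphrased from the 4.9 xen-blkback code (the pr_debug() is where I assume Juergen's debug output sits, so treat that line as illustrative, not verbatim):

/* drivers/block/xen-blkback/xenbus.c (paraphrased sketch) */
static int xen_blkif_disconnect(struct xen_blkif *blkif)
{
	unsigned int r;

	for (r = 0; r < blkif->nr_rings; r++) {
		struct xen_blkif_ring *ring = &blkif->rings[r];

		/* Stop the per-ring kernel thread first. */
		if (ring->xenblkd) {
			kthread_stop(ring->xenblkd);
			wake_up(&ring->shutdown_wq);
			ring->xenblkd = NULL;
		}

		/*
		 * With dd still writing in the domU, blkback has bios
		 * submitted to the backing device that have not yet
		 * completed, so the ring pages cannot be unmapped.
		 */
		if (atomic_read(&ring->inflight) > 0) {
			pr_debug("xen_blkif_disconnect: busy\n"); /* assumed debug-patch message */
			return -EBUSY;
		}

		/* ... unmap ring pages and free pending requests ... */
	}
	return 0;
}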

Without the dd running in the domU, the domU exits cleanly on destroy and I don't see the 'busy' message from above, e.g.:

xen-blkback: xen_blkif_free: delayed = 0
xen-blkback: xen_blkif_free: delayed = 0
br0: port 2(vif5.0) entered disabled state
br0: port 2(vif5.0) entered disabled state
device vif5.0 left promiscuous mode
br0: port 2(vif5.0) entered disabled state
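
The "delayed" number presumably comes from Juergen's added debug output in xen_blkif_free(); my assumption is that it counts how long (or how often) the free had to wait for in-flight I/O to drain. The mainline release path is refcount-driven, roughly like this (again paraphrased from the 4.9 sources):

/* drivers/block/xen-blkback/common.h (paraphrased): dropping the last
 * reference defers the actual free to a workqueue, since the final
 * xen_blkif_put() can happen from the I/O completion path. */
#define xen_blkif_put(_b)					\
	do {							\
		if (atomic_dec_and_test(&(_b)->refcnt))		\
			schedule_work(&(_b)->free_work);	\
	} while (0)

/* drivers/block/xen-blkback/xenbus.c (paraphrased sketch) */
static void xen_blkif_deferred_free(struct work_struct *work)
{
	struct xen_blkif *blkif =
		container_of(work, struct xen_blkif, free_work);

	/* Safe only once no requests are in flight, i.e. once
	 * xen_blkif_disconnect() would no longer return -EBUSY. */
	xen_blkif_free(blkif);
}

So if a reference is leaked, or the last put never fires, the backend (and with it the guest's granted pages) is never released and the domain lingers as '(null)'.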


Regards, Glenn
http://rimuhosting.com


{ domid=1; xl sysrq $domid s; sleep 2; xl destroy $domid; }
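(That is: ask the guest to sync via magic sysrq 's', give it two seconds, then destroy it while the dd still has writes in flight.)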

# xl list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0  1512     2     r-----      37.9
(null)                                       1     8     4     --p--d       9.5
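The '(null)' entry is the destroyed domU: the 'p' and 'd' state flags mean paused and dying, and it is still holding the last few MiB of its memory.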

# xl info
host                   : host480.rimuhosting.com
release                : 4.9.26-1.el6xen.x86_64
version                : #1 SMP Thu May 4 02:07:34 AEST 2017
machine                : x86_64
nr_cpus                : 4
max_cpu_id             : 3
nr_nodes               : 1
cores_per_socket       : 4
threads_per_core       : 1
cpu_mhz                : 2394
hw_caps                : b7ebfbff:0000e3bd:20100800:00000001:00000000:00000000:00000000:00000000
virt_caps              :
total_memory           : 8190
free_memory            : 6573
sharing_freed_memory   : 0
sharing_used_memory    : 0
outstanding_claims     : 0
free_cpus              : 0
xen_major              : 4
xen_minor              : 7
xen_extra              : .2
xen_version            : 4.7.2
xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p
xen_scheduler          : credit
xen_pagesize           : 4096
platform_params        : virt_start=0xffff800000000000
xen_changeset          :
xen_commandline        : dom0_mem=1512M cpufreq=xen dom0_max_vcpus=2 dom0_vcpus_pin loglvl=debug vcpu_migration_delay=1000
cc_compiler            : gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-18)
cc_compile_by          : mockbuild
cc_compile_domain      : xen.crc.id.au
cc_compile_date        : Wed Apr 19 18:17:37 AEST 2017
build_id               : d5f616d8c9d8b5decfdbdca9d19eacb60c666418
xend_config_format     : 4

(XEN) 'q' pressed -> dumping domain info (now=0x22E:795D0BA5)
(XEN) General information for domain 0:
(XEN)     refcnt=3 dying=0 pause_count=0
(XEN)     nr_pages=387072 xenheap_pages=5 shared_pages=0 paged_pages=0 dirty_cpus={0-1} max_pages=4294967295
(XEN)     handle=00000000-0000-0000-0000-000000000000 vm_assist=0000000d
(XEN) Rangesets belonging to domain 0:
(XEN)     I/O Ports  { 0-1f, 22-3f, 44-60, 62-9f, a2-cfb, d00-1007, 100c-ffff }
(XEN)     log-dirty  { }
(XEN)     Interrupts { 1-30 }
(XEN)     I/O Memory { 0-fedff, fef00-ffffff }
(XEN) Memory pages belonging to domain 0:
(XEN)     DomPage list too long to display
(XEN)     XenPage 000000000020e9c4: caf=c000000000000002, taf=7400000000000002
(XEN)     XenPage 000000000020e9c3: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000020e9c2: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 000000000020e9c1: caf=c000000000000001, taf=7400000000000001
(XEN)     XenPage 00000000000e7d2e: caf=c000000000000002, taf=7400000000000002
(XEN) NODE affinity for domain 0: [0]
(XEN) VCPU information and callbacks for domain 0:
(XEN)     VCPU0: CPU0 [has=T] poll=0 upcall_pend=00 upcall_mask=00 dirty_cpus={0}
(XEN)     cpu_hard_affinity={0} cpu_soft_affinity={0-3}
(XEN)     pause_count=0 pause_flags=0
(XEN)     No periodic timer
(XEN)     VCPU1: CPU1 [has=T] poll=0 upcall_pend=00 upcall_mask=01 dirty_cpus={1}
(XEN)     cpu_hard_affinity={1} cpu_soft_affinity={0-3}
(XEN)     pause_count=0 pause_flags=0
(XEN)     No periodic timer
(XEN) General information for domain 1:
(XEN)     refcnt=1 dying=2 pause_count=2
(XEN)     nr_pages=2113 xenheap_pages=0 shared_pages=0 paged_pages=0 dirty_cpus={} max_pages=1280256
(XEN)     handle=7a5642fc-2372-4174-9508-aae2ad1f6908 vm_assist=0000000d
(XEN) Rangesets belonging to domain 1:
(XEN)     I/O Ports  { }
(XEN)     log-dirty  { }
(XEN)     Interrupts { }
(XEN)     I/O Memory { }
(XEN) Memory pages belonging to domain 1:
(XEN)     DomPage 0000000000071c00: caf=00000001, taf=7400000000000001
(XEN)     DomPage 0000000000071c01: caf=00000001, taf=7400000000000001
(XEN)     DomPage 0000000000071c02: caf=00000001, taf=7400000000000001
(XEN)     DomPage 0000000000071c03: caf=00000001, taf=7400000000000001
(XEN)     DomPage 0000000000071c04: caf=00000001, taf=7400000000000001
(XEN)     DomPage 0000000000071c05: caf=00000001, taf=7400000000000001
(XEN)     DomPage 0000000000071c06: caf=00000001, taf=7400000000000001
(XEN)     DomPage 0000000000071c07: caf=00000001, taf=7400000000000001
(XEN)     DomPage 0000000000071c08: caf=00000001, taf=7400000000000001
(XEN)     DomPage 0000000000071c09: caf=00000001, taf=7400000000000001
(XEN)     DomPage 0000000000071c0a: caf=00000001, taf=7400000000000001
(XEN)     DomPage 0000000000071c0b: caf=00000001, taf=7400000000000001
(XEN)     DomPage 0000000000071c0c: caf=00000001, taf=7400000000000001
(XEN)     DomPage 0000000000071c0d: caf=00000001, taf=7400000000000001
(XEN)     DomPage 0000000000071c0e: caf=00000001, taf=7400000000000001
(XEN)     DomPage 0000000000071c0f: caf=00000001, taf=7400000000000001
(XEN) NODE affinity for domain 1: [0]
(XEN) VCPU information and callbacks for domain 1:
(XEN)     VCPU0: CPU0 [has=F] poll=0 upcall_pend=00 upcall_mask=01 dirty_cpus={}
(XEN)     cpu_hard_affinity={0-3} cpu_soft_affinity={0-3}
(XEN)     pause_count=0 pause_flags=0
(XEN)     No periodic timer
(XEN)     VCPU1: CPU1 [has=F] poll=0 upcall_pend=00 upcall_mask=01 dirty_cpus={}
(XEN)     cpu_hard_affinity={0-3} cpu_soft_affinity={0-3}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN)     VCPU2: CPU2 [has=F] poll=0 upcall_pend=00 upcall_mask=01 dirty_cpus={}
(XEN)     cpu_hard_affinity={0-3} cpu_soft_affinity={0-3}
(XEN)     pause_count=0 pause_flags=0
(XEN)     No periodic timer
(XEN)     VCPU3: CPU3 [has=F] poll=0 upcall_pend=00 upcall_mask=01 dirty_cpus={}
(XEN)     cpu_hard_affinity={0-3} cpu_soft_affinity={0-3}
(XEN)     pause_count=0 pause_flags=1
(XEN)     No periodic timer
(XEN) Notifying guest 0:0 (virq 1, port 4)
(XEN) Notifying guest 0:1 (virq 1, port 10)
(XEN) Notifying guest 1:0 (virq 1, port 0)
(XEN) Notifying guest 1:1 (virq 1, port 0)
(XEN) Notifying guest 1:2 (virq 1, port 0)
(XEN) Notifying guest 1:3 (virq 1, port 0)
(XEN) Shared frames 0 -- Saved frames 0
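
If I'm reading the dump right, domain 1 is fully dead as far as the hypervisor is concerned (dying=2), but refcnt=1 and nr_pages=2113 show its memory is still referenced (the DomPage list above is capped at 16 entries per page type), presumably by the grant mappings blkback in dom0 still holds. That would be exactly why the '(null)' shell never goes away.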

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
