
Re: [Xen-users] Ubuntu 12.04 as a PV error: xenconsole: Could not read tty from store



On Tue, 2015-05-26 at 12:00 -0700, Konstantin wrote:
> On 2015-05-26 02:08, Ian Campbell wrote:
> > On Mon, 2015-05-25 at 21:59 -0700, Konstantin wrote:
> >> Hi,
> >> 
> >> When I'm trying to start Ubuntu server 12.04 installation on one of my
> >> Dom0 I got an error:
> >> 
> >> # xl -vvv create /etc/xen/vm-confs/test -c
> > 
> > Dumb idea: Does putting the -c before the cfg file name help?
> > 
> > If you boot without -c then can you connect with xl console afterwards?
> > 
> 
> No. The machine gets destroyed after a few seconds of running anyway.
> 
> second vm-confs # xl -vvv create /etc/xen/vm-confs/test
> Parsing config from /etc/xen/vm-confs/test
> libxl: debug: libxl_create.c:1345:do_domain_create: ao 0x11e7840: create: 
> how=(nil) callback=(nil) poller=0x11dc930
> libxl: debug: libxl_device.c:251:libxl__device_disk_set_backend: Disk 
> vdev=xvda spec.backend=unknown
> libxl: debug: libxl_device.c:286:libxl__device_disk_set_backend: Disk 
> vdev=xvda, using backend phy
> libxl: debug: libxl_create.c:797:initiate_domain_create: running bootloader
> libxl: debug: libxl_bootloader.c:327:libxl__bootloader_run: no bootloader 
> configured, using user supplied kernel
> libxl: debug: libxl_event.c:618:libxl__ev_xswatch_deregister: watch 
> w=0x11dd1d8: deregister unregistered
> libxl: debug: libxl_numa.c:478:libxl__get_numa_candidate: New best NUMA 
> placement candidate found: nr_nodes=1, nr_cpus=8, nr_vcpus=21, 
> free_memkb=15034
> libxl: detail: libxl_dom.c:195:numa_place_domain: NUMA placement candidate 
> with 1 nodes, 8 cpus and 15034 KB free selected
> domainbuilder: detail: xc_dom_allocate: cmdline="", features="(null)"
> libxl: debug: libxl_dom.c:357:libxl__build_pv: pv kernel mapped 0 path 
> /etc/xen/kernels/precise/vmlinuz
> domainbuilder: detail: xc_dom_kernel_file: 
> filename="/etc/xen/kernels/precise/vmlinuz"
> domainbuilder: detail: xc_dom_malloc_filemap    : 4849 kB
> domainbuilder: detail: xc_dom_ramdisk_file: 
> filename="/etc/xen/kernels/precise/initrd.gz"
> domainbuilder: detail: xc_dom_malloc_filemap    : 30178 kB
> domainbuilder: detail: xc_dom_boot_xen_init: ver 4.4, caps xen-3.0-x86_64 
> xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
> domainbuilder: detail: xc_dom_parse_image: called
> domainbuilder: detail: xc_dom_find_loader: trying multiboot-binary loader ...
> domainbuilder: detail: loader probe failed
> domainbuilder: detail: xc_dom_find_loader: trying Linux bzImage loader ...
> domainbuilder: detail: xc_dom_malloc            : 18254 kB
> domainbuilder: detail: xc_dom_do_gunzip: unzip ok, 0x4b34e6 -> 0x11d3aa0
> domainbuilder: detail: loader probe OK
> xc: detail: elf_parse_binary: phdr: paddr=0x1000000 memsz=0xad5000
> xc: detail: elf_parse_binary: phdr: paddr=0x1c00000 memsz=0xe50e0
> xc: detail: elf_parse_binary: phdr: paddr=0x1ce6000 memsz=0x14480
> xc: detail: elf_parse_binary: phdr: paddr=0x1cfb000 memsz=0x364000
> xc: detail: elf_parse_binary: memory: 0x1000000 -> 0x205f000
> xc: detail: elf_xen_parse_note: GUEST_OS = "linux"
> xc: detail: elf_xen_parse_note: GUEST_VERSION = "2.6"
> xc: detail: elf_xen_parse_note: XEN_VERSION = "xen-3.0"
> xc: detail: elf_xen_parse_note: VIRT_BASE = 0xffffffff80000000
> xc: detail: elf_xen_parse_note: ENTRY = 0xffffffff81cfb200
> xc: detail: elf_xen_parse_note: HYPERCALL_PAGE = 0xffffffff81001000
> xc: detail: elf_xen_parse_note: FEATURES = 
> "!writable_page_tables|pae_pgdir_above_4gb"
> xc: detail: elf_xen_parse_note: PAE_MODE = "yes"
> xc: detail: elf_xen_parse_note: LOADER = "generic"
> xc: detail: elf_xen_parse_note: unknown xen elf note (0xd)
> xc: detail: elf_xen_parse_note: SUSPEND_CANCEL = 0x1
> xc: detail: elf_xen_parse_note: HV_START_LOW = 0xffff800000000000
> xc: detail: elf_xen_parse_note: PADDR_OFFSET = 0x0
> xc: detail: elf_xen_addr_calc_check: addresses:
> xc: detail:     virt_base        = 0xffffffff80000000
> xc: detail:     elf_paddr_offset = 0x0
> xc: detail:     virt_offset      = 0xffffffff80000000
> xc: detail:     virt_kstart      = 0xffffffff81000000
> xc: detail:     virt_kend        = 0xffffffff8205f000
> xc: detail:     virt_entry       = 0xffffffff81cfb200
> xc: detail:     p2m_base         = 0xffffffffffffffff
> domainbuilder: detail: xc_dom_parse_elf_kernel: xen-3.0-x86_64: 
> 0xffffffff81000000 -> 0xffffffff8205f000
> domainbuilder: detail: xc_dom_mem_init: mem 1024 MB, pages 0x40000 pages, 4k 
> each
> domainbuilder: detail: xc_dom_mem_init: 0x40000 pages
> domainbuilder: detail: xc_dom_boot_mem_init: called
> domainbuilder: detail: x86_compat: guest xen-3.0-x86_64, address size 64
> domainbuilder: detail: xc_dom_malloc            : 2048 kB
> domainbuilder: detail: xc_dom_build_image: called
> domainbuilder: detail: xc_dom_alloc_segment:   kernel       : 
> 0xffffffff81000000 -> 0xffffffff8205f000  (pfn 0x1000 + 0x105f pages)
> domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn 
> 0x1000+0x105f at 0x7fa918e24000
> xc: detail: elf_load_binary: phdr 0 at 0x7fa918e24000 -> 0x7fa9198f9000
> xc: detail: elf_load_binary: phdr 1 at 0x7fa919a24000 -> 0x7fa919b090e0
> xc: detail: elf_load_binary: phdr 2 at 0x7fa919b0a000 -> 0x7fa919b1e480
> xc: detail: elf_load_binary: phdr 3 at 0x7fa919b1f000 -> 0x7fa919bf7000
> domainbuilder: detail: xc_dom_alloc_segment:   ramdisk      : 
> 0xffffffff8205f000 -> 0xffffffff86b0a000  (pfn 0x205f + 0x4aab pages)
> domainbuilder: detail: xc_dom_malloc            : 448 kB
> domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn 
> 0x6b0a+0x200 at 0x7fa914179000
> domainbuilder: detail: xc_dom_alloc_page   :   start info   : 
> 0xffffffff86d0a000 (pfn 0x6d0a)
> domainbuilder: detail: xc_dom_alloc_page   :   xenstore     : 
> 0xffffffff86d0b000 (pfn 0x6d0b)
> domainbuilder: detail: xc_dom_alloc_page   :   console      : 
> 0xffffffff86d0c000 (pfn 0x6d0c)
> domainbuilder: detail: nr_page_tables: 0x0000ffffffffffff/48: 
> 0xffff000000000000 -> 0xffffffffffffffff, 1 table(s)
> domainbuilder: detail: nr_page_tables: 0x0000007fffffffff/39: 
> 0xffffff8000000000 -> 0xffffffffffffffff, 1 table(s)
> domainbuilder: detail: nr_page_tables: 0x000000003fffffff/30: 
> 0xffffffff80000000 -> 0xffffffffbfffffff, 1 table(s)
> domainbuilder: detail: nr_page_tables: 0x00000000001fffff/21: 
> 0xffffffff80000000 -> 0xffffffff86ffffff, 56 table(s)
> domainbuilder: detail: xc_dom_alloc_segment:   page tables  : 
> 0xffffffff86d0d000 -> 0xffffffff86d48000  (pfn 0x6d0d + 0x3b pages)
> domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn 
> 0x6d0d+0x3b at 0x7fa91f8f6000
> domainbuilder: detail: xc_dom_alloc_page   :   boot stack   : 
> 0xffffffff86d48000 (pfn 0x6d48)
> domainbuilder: detail: xc_dom_build_image  : virt_alloc_end : 
> 0xffffffff86d49000
> domainbuilder: detail: xc_dom_build_image  : virt_pgtab_end : 
> 0xffffffff87000000
> domainbuilder: detail: xc_dom_boot_image: called
> domainbuilder: detail: arch_setup_bootearly: doing nothing
> domainbuilder: detail: xc_dom_compat_check: supported guest type: 
> xen-3.0-x86_64 <= matches
> domainbuilder: detail: xc_dom_compat_check: supported guest type: 
> xen-3.0-x86_32p
> domainbuilder: detail: xc_dom_compat_check: supported guest type: 
> hvm-3.0-x86_32
> domainbuilder: detail: xc_dom_compat_check: supported guest type: 
> hvm-3.0-x86_32p
> domainbuilder: detail: xc_dom_compat_check: supported guest type: 
> hvm-3.0-x86_64
> domainbuilder: detail: xc_dom_update_guest_p2m: dst 64bit, pages 0x40000
> domainbuilder: detail: clear_page: pfn 0x6d0c, mfn 0x229fa8
> domainbuilder: detail: clear_page: pfn 0x6d0b, mfn 0x229fa9
> domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn 
> 0x6d0a+0x1 at 0x7fa91f9af000
> domainbuilder: detail: start_info_x86_64: called
> domainbuilder: detail: setup_hypercall_page: vaddr=0xffffffff81001000 
> pfn=0x1001
> domainbuilder: detail: domain builder memory footprint
> domainbuilder: detail:    allocated
> domainbuilder: detail:       malloc             : 20865 kB
> domainbuilder: detail:       anon mmap          : 0 bytes
> domainbuilder: detail:    mapped
> domainbuilder: detail:       file mmap          : 34 MB
> domainbuilder: detail:       domU mmap          : 93 MB
> domainbuilder: detail: arch_setup_bootlate: shared_info: pfn 0x0, mfn 0xbeae2
> domainbuilder: detail: shared_info_x86_64: called
> domainbuilder: detail: vcpu_x86_64: called
> domainbuilder: detail: vcpu_x86_64: cr3: pfn 0x6d0d mfn 0x229fa7
> domainbuilder: detail: launch_vm: called, ctxt=0x7fa91f9b0004
> domainbuilder: detail: xc_dom_release: called
> libxl: debug: libxl_device.c:251:libxl__device_disk_set_backend: Disk 
> vdev=xvda spec.backend=phy
> libxl: debug: libxl_event.c:570:libxl__ev_xswatch_register: watch w=0x11de678 
> wpath=/local/domain/0/backend/vbd/23/51712/state token=3/0: register slotnum=3
> libxl: debug: libxl_create.c:1359:do_domain_create: ao 0x11e7840: inprogress: 
> poller=0x11dc930, flags=i
> libxl: debug: libxl_event.c:514:watchfd_callback: watch w=0x11de678 
> wpath=/local/domain/0/backend/vbd/23/51712/state token=3/0: event 
> epath=/local/domain/0/backend/vbd/23/51712/state
> libxl: debug: libxl_event.c:657:devstate_watch_callback: backend 
> /local/domain/0/backend/vbd/23/51712/state wanted state 2 still waiting state 
> 1
> libxl: debug: libxl_event.c:514:watchfd_callback: watch w=0x11de678 
> wpath=/local/domain/0/backend/vbd/23/51712/state token=3/0: event 
> epath=/local/domain/0/backend/vbd/23/51712/state
> libxl: debug: libxl_event.c:653:devstate_watch_callback: backend 
> /local/domain/0/backend/vbd/23/51712/state wanted state 2 ok
> libxl: debug: libxl_event.c:606:libxl__ev_xswatch_deregister: watch 
> w=0x11de678 wpath=/local/domain/0/backend/vbd/23/51712/state token=3/0: 
> deregister slotnum=3
> libxl: debug: libxl_event.c:618:libxl__ev_xswatch_deregister: watch 
> w=0x11de678: deregister unregistered
> libxl: debug: libxl_device.c:1023:device_hotplug: calling hotplug script: 
> /etc/xen/scripts/block add
> libxl: debug: libxl_event.c:618:libxl__ev_xswatch_deregister: watch 
> w=0x11de700: deregister unregistered
> libxl: debug: libxl_event.c:618:libxl__ev_xswatch_deregister: watch 
> w=0x11de700: deregister unregistered
> libxl: debug: libxl_event.c:570:libxl__ev_xswatch_register: watch w=0x11e0298 
> wpath=/local/domain/0/backend/vif/23/0/state token=3/1: register slotnum=3
> libxl: debug: libxl_event.c:514:watchfd_callback: watch w=0x11e0298 
> wpath=/local/domain/0/backend/vif/23/0/state token=3/1: event 
> epath=/local/domain/0/backend/vif/23/0/state
> libxl: debug: libxl_event.c:657:devstate_watch_callback: backend 
> /local/domain/0/backend/vif/23/0/state wanted state 2 still waiting state 1
> libxl: debug: libxl_event.c:514:watchfd_callback: watch w=0x11e0298 
> wpath=/local/domain/0/backend/vif/23/0/state token=3/1: event 
> epath=/local/domain/0/backend/vif/23/0/state
> libxl: debug: libxl_event.c:653:devstate_watch_callback: backend 
> /local/domain/0/backend/vif/23/0/state wanted state 2 ok
> libxl: debug: libxl_event.c:606:libxl__ev_xswatch_deregister: watch 
> w=0x11e0298 wpath=/local/domain/0/backend/vif/23/0/state token=3/1: 
> deregister slotnum=3
> libxl: debug: libxl_event.c:618:libxl__ev_xswatch_deregister: watch 
> w=0x11e0298: deregister unregistered
> libxl: debug: libxl_device.c:1023:device_hotplug: calling hotplug script: 
> /etc/xen/scripts/vif-bridge online
> libxl: debug: libxl_event.c:618:libxl__ev_xswatch_deregister: watch 
> w=0x11e0320: deregister unregistered
> libxl: debug: libxl_event.c:618:libxl__ev_xswatch_deregister: watch 
> w=0x11e0320: deregister unregistered
> libxl: debug: libxl_event.c:1761:libxl__ao_progress_report: ao 0x11e7840: 
> progress report: ignored
> libxl: debug: libxl_event.c:1591:libxl__ao_complete: ao 0x11e7840: complete, 
> rc=0
> libxl: debug: libxl_event.c:1563:libxl__ao__destroy: ao 0x11e7840: destroy
> xc: debug: hypercall buffer: total allocations:570 total releases:570
> xc: debug: hypercall buffer: current allocations:0 maximum allocations:4
> xc: debug: hypercall buffer: cache current size:4
> xc: debug: hypercall buffer: cache hits:559 misses:4 toobig:7
> 
> So the problem isn't console related, right?

I'm not sure. There is no error that I can see in the above.

That suggests that perhaps the guest is starting successfully but is
crashing very soon after boot.

Are there any interesting logs under /var/log/xen? In particular there
should be an xl log showing the shutdown/crash.
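For example (default log locations; the exact file name depends on the guest name in the cfg, so "test" here is just a guess):

```shell
ls -lt /var/log/xen/
# The per-domain xl log is named after the guest, e.g.:
tail -n 50 /var/log/xen/xl-test.log
```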

You could try adding on_FOO="preserve" to your cfg, for FOO in {crashed,
shutdown, rebooted} (check the man page for the proper names, I'm going
from memory).
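Something like this in the guest cfg, though do double-check the option names against xl.cfg(5); I believe the actual keys are on_crash, on_poweroff and on_reboot:

```
# In /etc/xen/vm-confs/test: keep the domain around instead of
# destroying it, so it can be inspected after it dies.
on_crash    = "preserve"
on_poweroff = "preserve"
on_reboot   = "preserve"
```

With those set, a crashed guest should stay visible in "xl list" rather than vanishing.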

You might also want to enable xenconsoled logging. I'm not sure how that
works under Ubuntu 12.04; with the initscripts shipped by upstream I
would edit /etc/default/xencommons to set XENCONSOLED_TRACE=guest, which
ultimately adds --log=guest to the invocation of xenconsoled. The
resulting logs would be in /var/log/xen/console/. Under Ubuntu you
might instead need to edit XENCONSOLE_ARGS in /etc/default/*xen*.
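Roughly like this, assuming the upstream initscripts (file names and the restart command may well differ on Ubuntu):

```shell
# Upstream layout: enable per-guest console logging.
echo 'XENCONSOLED_TRACE=guest' >> /etc/default/xencommons
# On Ubuntu the knob may instead be something like:
#   XENCONSOLE_ARGS="--log=guest"
# in a /etc/default/xen* file.
/etc/init.d/xencommons restart   # or the Ubuntu equivalent
# Guest console output should then appear under:
ls /var/log/xen/console/
```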

Lastly, adding guest_loglvl=all to your hypervisor command line might
allow some extra guest debug output to reach Xen's own console.
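For instance, in the bootloader entry (the exact menu.lst/grub.cfg layout will vary with your GRUB version):

```
# Xen hypervisor line in the boot config; guest_loglvl=all lets guest
# printk output through to Xen's console, viewable with "xl dmesg".
kernel /boot/xen.gz loglvl=all guest_loglvl=all
```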

> > While the guest is running you should see the pty somewhere in
> > "xenstore-ls -fp", this is the "tty from store" which is being referred
> > to in the error message..
> > 
> > Ian.
> 
> When I tried to run a VM under strace I also noticed many ioctl calls 
> like the ones below. Not sure if it is related to my problem though.

strace doesn't know about the special /proc/xen/privcmd device and so
misreports its ioctls as SNDCTL_* ones. The EAGAIN is, I think, just
hypercall preemption, so quite normal.

> # strace -fff xl -vvv create -c /etc/xen/vm-confs/quartz
[....]
> [pid 20892] ioctl(15, SNDCTL_DSP_RESET, 0x7ffccffda100) = -1 EAGAIN 
> (Resource temporarily unavailable)
> [pid 20892] ioctl(15, SNDCTL_DSP_RESET, 0x7ffccffda100) = -1 EAGAIN 
> (Resource temporarily unavailable)
> [pid 20892] ioctl(15, SNDCTL_DSP_RESET, 0x7ffccffda100) = -1 EAGAIN 
> (Resource temporarily unavailable)
> [pid 20892] ioctl(15, SNDCTL_DSP_RESET, 0x7ffccffda100) = -1 EAGAIN 
> (Resource temporarily unavailable)
> ...
> 



_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users


 

