
Re: [Xen-devel] vnc=1 / pvgrub / close fb: backend at /local/domain/0/backend/vfb/xx/0



On Thu, 2014-10-23 at 02:53 +1100, Steven Haigh wrote:
> On 23/10/2014 2:40 AM, Ian Campbell wrote:
> > On Thu, 2014-10-23 at 02:23 +1100, Steven Haigh wrote:
> > 
> >> Output using pv-grub:
> > 
> > Can you also post the qemu logs please (under /var/log/xen somewhere I
> > think).
> 
> I get very little out of this:
> -rw-r--r--  1 root root    0 Oct 23 02:45 qemu-dm-dev.vm.log
> -rw-r--r--  1 root root    0 Oct 23 02:44 xen-hotplug.log
> -rw-r--r--  1 root root   55 Oct 23 02:45 xl-dev.vm.log
> [root@dom0 xen]# cat xl-dev.vm.log
> Waiting for domain dev.vm (domid 36) to die [pid 6970]
> 
> That's it :\

:-/ indeed.

> >> qemu-system-i38[3956]: segfault at 0 ip           (null) sp
> >> 00007fffb4573638 error 4
> > 
> > That might be a smoking gun. Is there a core dump and/or could you try
> > and run qemu under gdb?
> 
> Any hints on doing this? I can't say I'm a gdb guru.... I can't find any
> core dumps anywhere so that's not really helpful...

Fiddling with ulimit might cause core dumps to be created; one way to
do that is sketched just below.
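
For example (a rough sketch; the guest config path is illustrative and
where any core file ends up depends on /proc/sys/kernel/core_pattern
in your dom0):

        # ulimit -c unlimited
        # cat /proc/sys/kernel/core_pattern
        # xl create /etc/xen/dev.vm.cfg

then look for a core file (often just "core" in qemu's working
directory, or wherever core_pattern points) after the segfault.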

If not then

https://lists.gnu.org/archive/html/qemu-devel/2014-04/msg00302.html
https://lists.gnu.org/archive/html/qemu-devel/2011-12/msg02575.html

have some hints on running qemu via gdbserver. I've also had luck by
configuring the guest with a device model which is a wrapper script
that dumps its args to a file ("echo $@ > /tmp/qemu.args") and then
sleeps for an hour; a sketch of such a script is just below.
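
Something like this (a rough sketch; the script path is whatever you
point the guest config at, e.g. via device_model_override if I have
the option name right, and the script needs to be executable):

        #!/bin/sh
        # Stand-in device model: record the arguments libxl would
        # have passed to the real qemu, then park for an hour so
        # there is time to start qemu by hand under gdb.
        echo "$@" > /tmp/qemu.args
        exec sleep 3600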
In another terminal you can then run (fairly quickly, before xl
times out) something like:
        # gdb /path/to/qemu
        (gdb) run [the content of that file]
or possibly even
        # gdb --args /path/to/qemu `cat /tmp/qemu.args`
        (gdb) run
        
After it crashes, running "bt" will get a backtrace.
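
If the ulimit route does produce a core file instead then loading it
into gdb should give the same backtrace without any of the above (the
paths are illustrative):

        # gdb /path/to/qemu /path/to/core
        (gdb) bt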

Ian.



