
Re: [Xen-devel] dom0 kernel crashes for openstack + libvirt + libxl



Hi,

Roger is correct: after I applied that patch, reinstalled qemu, and modified /etc/init.d/xencommons to use the patched qemu to provide the disk backend, I finally succeeded in creating a Xen guest! I noticed that this patch was only proposed yesterday: just in time to solve my problem. :-)
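
For anyone hitting the same problem, the rough sequence was along these lines (a sketch rather than a verbatim transcript; the patch file name is a placeholder for the patch linked in Roger's reply below, and the configure flags are just the ones I used):

# rebuild qemu with the fix applied
cd /mnt/qemu-2.0.2
patch -p1 < backend-fix.patch       # placeholder name; patch from the thread below
./configure --enable-xen --target-list=i386-softmmu
make && make install

# edit /etc/init.d/xencommons so the line that launches the qemu disk
# backend points at the rebuilt binary, then restart it:
/etc/init.d/xencommons restart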


root@compute1:/mnt/qemu-2.0.2# virsh --connect=xen:///
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
       'quit' to quit

virsh # list
 Id    Name                 State
----------------------------------------------------
 1     instance-00000028    running

virsh #

I have been blocked by this problem for almost three weeks, and as far as I know there is no online tutorial on setting up a Xen-based compute node for OpenStack Juno. I will document the steps I took and share them with the community.
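
Until that write-up is ready, the core compute-node change is small; a minimal sketch of the relevant nova.conf lines (standard Juno option names; the rest of the file stays as the installation guide sets it up):

# /etc/nova/nova.conf on the compute node, relevant lines only
[DEFAULT]
compute_driver = libvirt.LibvirtDriver

[libvirt]
virt_type = xen     # drive Xen through libvirt/libxl instead of KVM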

Cheers,
-Xing

On Thu, Nov 13, 2014 at 3:57 AM, Roger Pau Monné <roger.pau@xxxxxxxxxx> wrote:
On 12/11/14 at 19:41, Xing Lin wrote:
> Hi,
>
> I am aware that Xen via libvirt is in group C support for OpenStack, but
> since I am not able to install the XenServer ISO on my compute machines, I
> have to consider using Xen with libvirt (xcp-xapi is not available for
> Ubuntu 14.04). I have three nodes, each running Ubuntu 14.04. I followed
> the instructions to install Juno on Ubuntu 14.04, and it works (I can create
> instances from the OpenStack GUI, Horizon) when I use KVM as the hypervisor
> on the compute node. However, if I switch to Xen as the hypervisor by
> installing xen-hypervisor-amd64 or nova-compute-xen, it fails to create
> instances, complaining "backend for qemu disk is not ready". I checked
> and did not find a qemu process running in dom0; I could not find
> /etc/init.d/xencommons either. I then compiled and installed Xen 4.4 from
> source, got xencommons in /etc/init.d, and a qemu process now runs in
> dom0. However, the dom0 kernel now crashes when I try to launch an
> instance from the OpenStack GUI. I am able to create a Xen guest with
> virt-install from the command line, using the following command:
>
> "$ virt-install --connect=xen:/// --name u14.04.3 --ram 1024 --disk
> u14.04.3.img,size=4,driver_name=qemu --location
> http://ftp.ubuntu.com/ubuntu/dists/trusty/main/installer-amd64/ --network
> bridge=virbr0"
>
>
> dmesg:
>
> [ 9443.130600] blkfront: xvda: flush diskcache: enabled; persistent grants:
> enabled; indirect descriptors: disabled;
> [ 9443.132818]  xvda: xvda1
> [ 9444.604489] xen:grant_table: WARNING: g.e. 0x30 still in use!
> [ 9444.604496] deferring g.e. 0x30 (pfn 0xffffffffffffffff)
> [ 9444.604499] xen:grant_table: WARNING: g.e. 0x31 still in use!
> [ 9444.604502] deferring g.e. 0x31 (pfn 0xffffffffffffffff)
> [ 9444.604505] xen:grant_table: WARNING: g.e. 0x32 still in use!
> [ 9444.604508] deferring g.e. 0x32 (pfn 0xffffffffffffffff)
>  ==== lots of them ====
> [ 9444.604719] xen:grant_table: WARNING: g.e. 0xe still in use!
> [ 9444.604721] deferring g.e. 0xe (pfn 0xffffffffffffffff)
> [ 9444.604723] xen:grant_table: WARNING: g.e. 0xd still in use!
> [ 9444.604725] deferring g.e. 0xd (pfn 0xffffffffffffffff)
> [ 9448.325408] ------------[ cut here ]------------
> [ 9448.325421] WARNING: CPU: 5 PID: 19902 at
> /build/buildd/linux-3.13.0/arch/x86/xen/multicalls.c:129 xen_mc_flush+0x1a9/0x1b0()
> [ 9448.325492] CPU: 5 PID: 19902 Comm: sudo Tainted: GF          O
> 3.13.0-33-generic #58-Ubuntu
> [ 9448.325494] Hardware name: Dell Inc. PowerEdge R710/00W9X3, BIOS 2.1.15
> 09/02/2010
> [ 9448.325497]  0000000000000009 ffff8802d13d9aa8 ffffffff8171bd04
> 0000000000000000
> [ 9448.325501]  ffff8802d13d9ae0 ffffffff810676cd 0000000000000000
> 0000000000000001
> [ 9448.325505]  ffff88030418ffe0 ffff88031d6ab180 0000000000000010
> ffff8802d13d9af0
> [ 9448.325509] Call Trace:
> [ 9448.325518]  [<ffffffff8171bd04>] dump_stack+0x45/0x56
> [ 9448.325523]  [<ffffffff810676cd>] warn_slowpath_common+0x7d/0xa0
> [ 9448.325526]  [<ffffffff810677aa>] warn_slowpath_null+0x1a/0x20
> [ 9448.325530]  [<ffffffff81004e99>] xen_mc_flush+0x1a9/0x1b0
> [ 9448.325534]  [<ffffffff81006b99>] xen_set_pud_hyper+0x109/0x110
> [ 9448.325538]  [<ffffffff81006c3b>] xen_set_pud+0x9b/0xb0
> [ 9448.325543]  [<ffffffff81177f06>] __pmd_alloc+0xd6/0x110
> [ 9448.325548]  [<ffffffff81182218>] move_page_tables+0x678/0x720
> [ 9448.325552]  [<ffffffff8117ddb7>] ? vma_adjust+0x337/0x7d0
> [ 9448.325557]  [<ffffffff811c23fd>] shift_arg_pages+0xbd/0x1b0
> [ 9448.325560]  [<ffffffff81181671>] ? mprotect_fixup+0x151/0x290
> [ 9448.325564]  [<ffffffff811c26cb>] setup_arg_pages+0x1db/0x200
> [ 9448.325570]  [<ffffffff81213edc>] load_elf_binary+0x41c/0xd80
> [ 9448.325576]  [<ffffffff8131d503>] ? ima_get_action+0x23/0x30
> [ 9448.325579]  [<ffffffff8131c7e2>] ? process_measurement+0x82/0x2c0
> [ 9448.325584]  [<ffffffff811c2f7f>] search_binary_handler+0x8f/0x1b0
> [ 9448.325587]  [<ffffffff811c44f7>] do_execve_common.isra.22+0x5a7/0x7e0
> [ 9448.325591]  [<ffffffff811c49c6>] SyS_execve+0x36/0x50
> [ 9448.325596]  [<ffffffff8172cc99>] stub_execve+0x69/0xa0
> [ 9448.325599] ---[ end trace 53ac16782e751c0a ]---
> [ 9448.347994] ------------[ cut here ]------------
> [ 9448.348004] WARNING: CPU: 1 PID: 19902 at
> /build/buildd/linux-3.13.0/mm/mmap.c:2736 exit_mmap+0x16b/0x170()
> ==== more messages omitted ====
>
>
> I sent emails to the OpenStack mailing list and received a few
> suggestions. Finally, Bob Ball suggested that I ask this question on the
> xen-devel mailing list. Any help will be highly appreciated!

This looks like the bug already reported by George Dunlap; could you try
applying the following patch to qemu-xen and recompiling?

http://lists.xenproject.org/archives/html/xen-devel/2014-11/msg01112.html
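
In case it helps, applying it against a Xen 4.4 source tree would look roughly like this (a sketch; the saved patch file name is a placeholder, and the directory and target names are the ones a typical xen-4.4 source build uses):

# tools/qemu-xen-dir is where the top-level build checks out qemu-xen
cd xen-4.4/tools/qemu-xen-dir
patch -p1 < msg01112.patch          # patch saved from the URL above
cd ../..
make build-tools && make install-tools
/etc/init.d/xencommons restart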

Roger.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
