
Re: [Xen-devel] Failure to boot HVM guest with more than 32 VCPUS



"Hao, Xudong" <xudong.hao@xxxxxxxxx> writes:

> Hi, 
>
> On an x86_64 platform, we noticed an issue: when Xen boots a RHEL6u6
> or Fedora22 guest configured with more than 32 VCPUs, the guest fails
> to boot up.

The issue is well known for RHEL6.6 (and is fixed in 6.7 and in 6.6.z),
but Fedora22 should boot. The log below is from RHEL6.6; can you please
provide one from Fedora?
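
For background on why 32 is the magic number here: the legacy Xen
guest interface only has XEN_LEGACY_MAX_VCPUS (32) vcpu_info slots in
the shared_info page, so a guest kernel that wants working event
delivery on vCPUs 32 and up has to relocate each vCPU's vcpu_info
with the VCPUOP_register_vcpu_info hypercall. Below is a minimal
sketch of that registration, modeled loosely on Linux's
xen_vcpu_setup(); the per-cpu variable and function names are mine,
but the hypercall and struct vcpu_register_vcpu_info are the actual
ABI from xen/include/public/vcpu.h:

#include <linux/percpu.h>        /* DEFINE_PER_CPU(), per_cpu() */
#include <linux/mm.h>            /* offset_in_page() */
#include <xen/interface/vcpu.h>  /* VCPUOP_register_vcpu_info */
#include <xen/page.h>            /* arbitrary_virt_to_mfn() */
#include <asm/xen/hypercall.h>   /* HYPERVISOR_vcpu_op() */

/* One vcpu_info per CPU, living outside the 32-entry shared_info
 * array, so any vCPU number can receive events. */
static DEFINE_PER_CPU(struct vcpu_info, my_vcpu_info);

static int my_register_vcpu_info(int cpu)
{
	struct vcpu_info *vcpup = &per_cpu(my_vcpu_info, cpu);
	struct vcpu_register_vcpu_info info;

	/* Tell Xen where this vCPU's vcpu_info lives. */
	info.mfn = arbitrary_virt_to_mfn(vcpup);
	info.offset = offset_in_page(vcpup);

	return HYPERVISOR_vcpu_op(VCPUOP_register_vcpu_info, cpu, &info);
}

A kernel that skips (or breaks) this step can still bring up the
first 32 vCPUs but has no event delivery beyond that, which would
match the "CPU32: Stuck ??" symptom in the log. Until you have a
fixed kernel, capping the guest at vcpus=32 in the config should let
it boot.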

>
> Guest config:
> ====================================================
> memory = 2173
> vcpus=33
> vif = [ 'type=ioemu, mac=00:16:3e:7c:87:72, bridge=xenbr0' ]
> disk = [ '/root/tao/test/rhel6u6.qcow,qcow2,xvda,rw' ]
> device_model_override = '/usr/local/lib/xen/bin/qemu-system-i386'
> device_model_version = 'qemu-xen'
> sdl=0
> vnc=1
> stdvga=1
> acpi=1
> hpet=1
>
> Pasting part of the guest log here (whole log attached).
> =====================================================
> ...
> Disabled fast string operations
>  #31
> Disabled fast string operations
>  #32
> Disabled fast string operations
> CPU32: Stuck ??
> Brought up 32 CPUs
> Total of 32 processors activated (178772.22 BogoMIPS).
> BUG: soft lockup - CPU#0 stuck for 67s! [swapper:1]
> Modules linked in:
> CPU 0 
> Modules linked in:
>
> Pid: 1, comm: swapper Not tainted 2.6.32-504.el6.x86_64 #1 Xen HVM domU
> RIP: 0010:[<ffffffff810b7438>]  [<ffffffff810b7438>] 
> smp_call_function_many+0x1e8/0x260
> RSP: 0018:ffff88010cc1fca0  EFLAGS: 00000202
> RAX: 0000000000000080 RBX: ffff88010cc1fce0 RCX: 0000000000000020
> RDX: 0000000000000020 RSI: 0000000000000080 RDI: 0000000000000000
> RBP: ffffffff8100bb8e R08: ffff88010eeb0c10 R09: 0000000000000080
> R10: 00000000000001c8 R11: 0000000000000000 R12: ffff88010eee0600
> R13: ffff88010ecb02c0 R14: ffff88010eee0600 R15: ffffffff81324619
> FS:  0000000000000000(0000) GS:ffff88002c000000(0000) knlGS:0000000000000000
> CS:  0010 DS: 0018 ES: 0018 CR0: 000000008005003b
> CR2: 0000000000000000 CR3: 0000000001a85000 CR4: 00000000001406f0
> DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
> Process swapper (pid: 1, threadinfo ffff88010cc1e000, task ffff88010cc1d500)
> Stack:
>  01ff88010cc1fcd0 ffff88010df10000 0000000000000246 ffffffff81172f60
> <d> ffff88010df10000 00000000ffffffff ffff88010caa1300 000000000000fbac
> <d> ffff88010cc1fcf0 ffffffff810b74d2 ffff88010cc1fd20 ffffffff8107d594
> Call Trace:
>  [<ffffffff81172f60>] ? do_ccupdate_local+0x0/0x40
>  [<ffffffff810b74d2>] ? smp_call_function+0x22/0x30
>  [<ffffffff8107d594>] ? on_each_cpu+0x24/0x50
>  [<ffffffff81176080>] ? do_tune_cpucache+0x110/0x550
>  [<ffffffff8117668b>] ? enable_cpucache+0x3b/0xf0
>  [<ffffffff81512aee>] ? setup_cpu_cache+0x21e/0x270
>  [<ffffffff8117733a>] ? kmem_cache_create+0x41a/0x5a0
>  [<ffffffff81142620>] ? shmem_init_inode+0x0/0x20
>  [<ffffffff81c51434>] ? shmem_init+0x3c/0xd7
>  [<ffffffff81c2a8b4>] ? kernel_init+0x26b/0x2f7
>  [<ffffffff8100c20a>] ? child_rip+0xa/0x20
>  [<ffffffff81c2a649>] ? kernel_init+0x0/0x2f7
>  [<ffffffff8100c200>] ? child_rip+0x0/0x20
> Code: e8 8e 5a 47 00 0f ae f0 48 8b 7b 30 ff 15 49 1f 9e 00 80 7d c7 00 0f 84 
> 9f fe ff ff f6 43 20 01 0f 84 95 fe ff ff 0f 1f 44 00 00 <f3> 90 f6 43 20 01 
> 75 f8 e9 83 fe ff ff 0f 1f 00 4c 89 ea 4c 89 
> Call Trace:
>  [<ffffffff81172f60>] ? do_ccupdate_local+0x0/0x40
>  [<ffffffff810b74d2>] ? smp_call_function+0x22/0x30
>  [<ffffffff8107d594>] ? on_each_cpu+0x24/0x50
>  [<ffffffff81176080>] ? do_tune_cpucache+0x110/0x550
>  [<ffffffff8117668b>] ? enable_cpucache+0x3b/0xf0
>  [<ffffffff81512aee>] ? setup_cpu_cache+0x21e/0x270
>  [<ffffffff8117733a>] ? kmem_cache_create+0x41a/0x5a0
>  [<ffffffff81142620>] ? shmem_init_inode+0x0/0x20
>  [<ffffffff81c51434>] ? shmem_init+0x3c/0xd7
>  [<ffffffff81c2a8b4>] ? kernel_init+0x26b/0x2f7
>  [<ffffffff8100c20a>] ? child_rip+0xa/0x20
>  [<ffffffff81c2a649>] ? kernel_init+0x0/0x2f7
>  [<ffffffff8100c200>] ? child_rip+0x0/0x20
>
> Best Regards,
> Xudong
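
A side note on the trace itself: the Code: bytes decode to a
pause-and-test loop, so CPU#0 appears to be spinning in the
csd_lock_wait() path of smp_call_function_many(), waiting for a
remote CPU to acknowledge an IPI that never arrives. For reference,
this is roughly the shape of that wait loop in 2.6.32-era kernels
(paraphrased from kernel/smp.c, not copied from the RHEL sources):

/* Spin until the target CPU has processed the request and dropped
 * the lock flag -- this is the pause loop CPU#0 is stuck in above. */
static void csd_lock_wait(struct call_single_data *data)
{
	while (data->flags & CSD_FLAG_LOCK)
		cpu_relax();
}

If event delivery to one vCPU is broken, this loop never terminates
and the soft lockup watchdog fires, exactly as in the log.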

-- 
  Vitaly

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel