
Re: [Xen-devel] [BUG] hhvm running on Ubuntu 13.04 with Xen Hypervisor - linux kernel panic



I'm not sure what you mean by posting the CPUID leaf for an AMD processor;
what is the command? (Sorry, just a dumb PHP developer here!)
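
Is it something like the 'cpuid' tool, or a small C program that runs the
CPUID instruction inside the guest? This is just my guess at what you are
asking for (untested, so please correct me if you meant a different command):

#include <stdio.h>
#include <cpuid.h>          /* GCC helper for the CPUID instruction */

static void dump_leaf(unsigned int leaf)
{
    unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;

    /* __get_cpuid returns 0 if the leaf is above the maximum
     * supported leaf, i.e. not available on this CPU/guest. */
    if (!__get_cpuid(leaf, &eax, &ebx, &ecx, &edx)) {
        printf("leaf 0x%x: not supported\n", leaf);
        return;
    }
    printf("leaf 0x%x: eax=%08x ebx=%08x ecx=%08x edx=%08x\n",
           leaf, eax, ebx, ecx, edx);
}

int main(void)
{
    dump_leaf(0xa);        /* Intel architectural perfmon leaf */
    dump_leaf(0x80000001); /* AMD extended feature leaf */
    return 0;
}

It should build with plain gcc; if you meant the 'cpuid' command-line
utility instead, just let me know.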

Here is the output you requested from dmesg:

dmesg | grep -i perf

[    0.004000] Initializing cgroup subsys perf_event
[    0.064156] Performance Events:

Sorry if it's not more helpful!
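
In case it helps, here is a small C test case that I think goes down the
same path as the call trace quoted below (perf_ioctl -> perf_event_reset ->
x86_pmu_read -> native_read_pmc). It is only my guess at what hhvm is doing
when it starts up, so please treat it as a sketch rather than a confirmed
reproducer:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

int main(void)
{
    struct perf_event_attr attr;

    memset(&attr, 0, sizeof(attr));
    attr.type = PERF_TYPE_HARDWARE;
    attr.size = sizeof(attr);
    attr.config = PERF_COUNT_HW_CPU_CYCLES;

    /* Count cycles for this process on any CPU. */
    int fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
    if (fd < 0) {
        perror("perf_event_open");
        return 1;
    }

    /* This ioctl is the one that shows up in the backtrace; the kernel
     * reads the counter before resetting it. */
    if (ioctl(fd, PERF_EVENT_IOC_RESET, 0) < 0)
        perror("PERF_EVENT_IOC_RESET");

    long long count = 0;
    if (read(fd, &count, sizeof(count)) == sizeof(count))
        printf("cycles: %lld\n", count);

    close(fd);
    return 0;
}

If my reading of the trace is right, running this inside the guest should
exercise the same RDPMC path.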

Craig.



On 22/09/2013 01:08, "Boris Ostrovsky" <boris.ostrovsky@xxxxxxxxxx> wrote:

>
>----- konrad.wilk@xxxxxxxxxx wrote:
>
>> On Wed, Sep 18, 2013 at 11:21:18AM +0000, Craig Carnell wrote:
>> > Hi,
>> > 
>> > I am trying out HipHop VM (the PHP just-in-time compiler). My setup
>> is a Rackspace Cloud Server running Ubuntu 13.04 with kernel
>> 3.8.0-30-generic #44-Ubuntu SMP Thu Aug 22 20:52:24 UTC 2013 x86_64
>> x86_64 x86_64 GNU/Linux
>> > 
>> > The cloud server uses the Xen hypervisor.
>> > 
>> > HHVM is compiled from source from the GitHub repo. When I run hhvm
>> from the command line (without any options or a PHP application), the
>> system immediately crashes, throwing Linux into a kernel panic.
>> > 
>> 
>> And what happens if you run 'perf' by itself?
>> 
>> 
>> > I have reported this issue on the HipHop GitHub issue page:
>> > 
>> > https://github.com/facebook/hiphop-php/issues/1065
>> > 
>> > I am not sure if this is a Linux kernel bug or a Xen hypervisor
>> bug:
>> > 
>> > The output of /var/log/syslog:
>> > 
>> > Sep 18 10:55:58 web kernel: [92118.674736] general protection fault:
>> 0000 [#1] SMP
>> > Sep 18 10:55:58 web kernel: [92118.674754] Modules linked in:
>> xenfs(F) xen_privcmd(F) xt_tcpudp(F) nf_conntrack_ipv4(F)
>> nf_defrag_ipv4(F) xt_state(F) nf_conntrack(F) xt_comment(F)
>> iptable_filter(F) ip_tables(F) x_tables(F) microcode(F) lp(F)
>> parport(F)
>> > Sep 18 10:55:58 web kernel: [92118.674781] CPU 0
>> > Sep 18 10:55:58 web kernel: [92118.674787] Pid: 5020, comm: hhvm
>> Tainted: GF            3.8.0-30-generic #44-Ubuntu
>> > Sep 18 10:55:58 web kernel: [92118.674795] RIP:
>> e030:[<ffffffff81003046>]  [<ffffffff81003046>]
>> native_read_pmc+0x6/0x20
>
>
>The link above seems to imply that this is a PV guest. The RDPMC
>instruction is not currently emulated, which would cause a #GP in the
>guest.
>
>I suspect that hhvm may be assuming that performance counters exist,
>and this is not always the case.
>
>Can you post CPUID leaf 0xa if this is an Intel processor, or leaf
>0x80000001 if it is AMD (from the guest)? And 'dmesg | grep -i perf'.
>
>-boris
>
>
>> > Sep 18 10:55:58 web kernel: [92118.674809] RSP:
>> e02b:ffff8800026b9d20  EFLAGS: 00010083
>> > Sep 18 10:55:58 web kernel: [92118.674814] RAX: ffffffff81c1bd80
>> RBX: 0000000000000000 RCX: 0000000000000000
>> > Sep 18 10:55:58 web kernel: [92118.674819] RDX: 0000000000005f6c
>> RSI: ffff8800f7c81900 RDI: 0000000000000000
>> > Sep 18 10:55:58 web kernel: [92118.674824] RBP: ffff8800026b9d20
>> R08: 00000000000337d8 R09: ffff8800e933dcc0
>> > Sep 18 10:55:58 web kernel: [92118.674830] R10: 00007fff2d3caea0
>> R11: 0000000000000246 R12: ffff8800f87ecc00
>> > Sep 18 10:55:58 web kernel: [92118.674835] R13: ffff800000000001
>> R14: ffff8800f87ecd70 R15: 0000000000000010
>> > Sep 18 10:55:58 web kernel: [92118.674844] FS:
>> 00007f43d4c9b180(0000) GS:ffff8800ffc00000(0000)
>> knlGS:0000000000000000
>> > Sep 18 10:55:58 web kernel: [92118.674850] CS:  e033 DS: 0000 ES:
>> 0000 CR0: 000000008005003b
>> > Sep 18 10:55:58 web kernel: [92118.674855] CR2: 000000000105ebc0
>> CR3: 00000000025cd000 CR4: 0000000000000660
>> > Sep 18 10:55:58 web kernel: [92118.674861] DR0: 0000000000000000
>> DR1: 0000000000000000 DR2: 0000000000000000
>> > Sep 18 10:55:58 web kernel: [92118.674867] DR3: 0000000000000000
>> DR6: 00000000ffff0ff0 DR7: 0000000000000400
>> > Sep 18 10:55:58 web kernel: [92118.674872] Process hhvm (pid: 5020,
>> threadinfo ffff8800026b8000, task ffff8800f7cfc5c0)
>> > Sep 18 10:55:58 web kernel: [92118.674879] Stack:
>> > Sep 18 10:55:58 web kernel: [92118.674882]  ffff8800026b9d58
>> ffffffff81024625 0000000000000000 ffff8800f87ecc00
>> > Sep 18 10:55:58 web kernel: [92118.674893]  ffff8800f7c8190c
>> ffffffff811231a0 0000000000000005 ffff8800026b9d68
>> > Sep 18 10:55:58 web kernel: [92118.674902]  ffffffff81024689
>> ffff8800026b9d90 ffffffff811232a6 00000000ffff02ff
>> > Sep 18 10:55:58 web kernel: [92118.674911] Call Trace:
>> > Sep 18 10:55:58 web kernel: [92118.674920]  [<ffffffff81024625>]
>> x86_perf_event_update+0x55/0xb0
>> > Sep 18 10:55:58 web kernel: [92118.674929]  [<ffffffff811231a0>] ?
>> perf_read+0x2f0/0x2f0
>> > Sep 18 10:55:58 web kernel: [92118.674936]  [<ffffffff81024689>]
>> x86_pmu_read+0x9/0x10
>> > Sep 18 10:55:58 web kernel: [92118.674942]  [<ffffffff811232a6>]
>> __perf_event_read+0x106/0x110
>> > Sep 18 10:55:58 web kernel: [92118.674951]  [<ffffffff810b9987>]
>> smp_call_function_single+0x147/0x170
>> > Sep 18 10:55:58 web kernel: [92118.674959]  [<ffffffff811240d0>] ?
>> perf_mmap+0x2f0/0x2f0
>> > Sep 18 10:55:58 web kernel: [92118.674966]  [<ffffffff81122dda>]
>> perf_event_read+0x10a/0x110
>> > Sep 18 10:55:58 web kernel: [92118.674972]  [<ffffffff811240d0>] ?
>> perf_mmap+0x2f0/0x2f0
>> > Sep 18 10:55:58 web kernel: [92118.674979]  [<ffffffff811240dd>]
>> perf_event_reset+0xd/0x20
>> > Sep 18 10:55:58 web kernel: [92118.674987]  [<ffffffff8111ff08>]
>> perf_event_for_each_child+0x38/0xa0
>> > Sep 18 10:55:58 web kernel: [92118.674994]  [<ffffffff811240d0>] ?
>> perf_mmap+0x2f0/0x2f0
>> > Sep 18 10:55:58 web kernel: [92118.675001]  [<ffffffff8112255a>]
>> perf_ioctl+0xba/0x340
>> > Sep 18 10:55:58 web kernel: [92118.675009]  [<ffffffff811b1885>] ?
>> fd_install+0x25/0x30
>> > Sep 18 10:55:58 web kernel: [92118.675016]  [<ffffffff811a60e9>]
>> do_vfs_ioctl+0x99/0x570
>> > Sep 18 10:55:58 web kernel: [92118.675023]  [<ffffffff811a6651>]
>> sys_ioctl+0x91/0xb0
>> > Sep 18 10:55:58 web kernel: [92118.675031]  [<ffffffff816d575d>]
>> system_call_fastpath+0x1a/0x1f
>> > Sep 18 10:55:58 web kernel: [92118.675036] Code: 00 00 00 00 00 55
>> 89 f9 48 89 e5 0f 32 31 ff 89 c0 48 c1 e2 20 89 3e 48 09 c2 48 89 d0
>> 5d c3 66 0f 1f 44 00 00 55 89 f9 48 89 e5 <0f> 33 89 c0 48 c1 e2 20 48
>> 09 c2 48 89 d0 5d c3 66 2e 0f 1f 84
>> > Sep 18 10:55:58 web kernel: [92118.675103] RIP  [<ffffffff81003046>]
>> native_read_pmc+0x6/0x20
>> > Sep 18 10:55:58 web kernel: [92118.675110]  RSP <ffff8800026b9d20>
>> > Sep 18 10:55:58 web kernel: [92118.675118] ---[ end trace
>> 1a73231ba5f74716 ]---
>> > 
>> 
>



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

