
[Xen-users] Can Linux HVM domU use x2apic?


  • To: xen-users@xxxxxxxxxxxxx
  • From: Radim Krčmář <rkrcmar@xxxxxxxxxx>
  • Date: Wed, 17 Apr 2013 16:13:13 +0200
  • Delivery-date: Wed, 17 Apr 2013 14:15:07 +0000
  • List-id: Xen user discussion <xen-users.lists.xen.org>

Hello,

I wanted to try x2apic instead of the default callbacks in an HVM domU.

This was partially achieved by adding "xen_have_vector_callback = 0" to
xen_hvm_guest_init (a sketch of the hack is included further below):
- The patched 2.6.32 kernel enabled x2apic, but the domain panicked
  (xen_panic_event); when booted with the nox2apic flag, everything was ok.
- The upstream kernel just went to safe_halt, which made me doubt the
  sanity of this hack.
Disabling the Xen driver led to a bootable kernel, but x2apic did not
activate.
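
For reference, the hack was roughly the following. This is only a sketch
against arch/x86/xen/enlighten.c (xen_have_vector_callback and
xen_hvm_guest_init do live there, but the surrounding calls and the exact
placement of the assignment are approximations, not the actual patch):

    /* arch/x86/xen/enlighten.c - approximate shape of the hack.
     * Forcing the flag to 0 means the guest never switches interrupt
     * delivery to the per-vCPU vector callback, even if
     * XENFEAT_hvm_callback_vector is advertised, which is what the
     * x2apic experiment was after. */
    static void __init xen_hvm_guest_init(void)
    {
            init_hvm_pv_info();
            xen_hvm_init_shared_info();

            xen_have_vector_callback = 0;   /* the hack */

            xen_hvm_smp_init();
            x86_init.irqs.intr_init = xen_init_IRQ;
            xen_hvm_init_time_ops();
            xen_hvm_init_mmu_ops();
    }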

Does x2apic work under some circumstances?

Thanks.


---
This happened on 4.2-f18 and 4.3-unstable hypervisors with patched
2.6.32-RHEL6 and 3.9-rc6 kernels. (Additional patches from upstream were
required for x2apic in RHEL6.)
The CPU was Sandy Bridge; dom0 used x2apic; bootable guests claimed x2apic
capability.
The xenctx output is below.
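
(The dumps were taken with xenctx; the invocation below is a reconstruction
from memory, so the path and options are an assumption rather than the exact
command used.)

    # -s gives xenctx the guest kernel's System.map so addresses resolve
    # to the symbols shown below; -a dumps every vCPU of the domain
    /usr/lib/xen/bin/xenctx -s /boot/System.map-<guest-kernel> -a <domid>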


--- patched 2.6.32-367.el6 ---

rip: ffffffff81013850 native_read_tsc 
flags: 00000246 i z p
rsp: ffff88007e8e1d68
rax: 000000006d1a47a0   rcx: 000000006d1a47a0   rdx: 0000000000008edc
rbx: 0000000000000000   rsi: 00000000001c7368   rdi: 000000000002d859
rbp: ffff88007e8e1d90    r8: 0000000000000001    r9: 0000000000000000
r10: 0000000000000000   r11: 0000000000000001   r12: 000000006d19b8c4
r13: 0000000000000000   r14: 000000000002d859   r15: 0000000000000000
 cs: 0010        ss: 0018        ds: 0018        es: 0018
 fs: 0000 @ 0000000000000000
 gs: 0000 @ ffff880002200000/0000000000000000
Code (instr addr ffffffff81013850)
c3 0f 1f 40 00 55 89 f0 48 89 e5 e6 70 89 f8 e6 71 c9 c3 66 90 <55> 48 89 e5 0f 
31 89 c1 48 89 d0 


Stack:
 ffffffff812826ca 0000000000000002 0000000000000001 0000000000000000
 0000000000000004 ffff88007e8e1da0 ffffffff81282676 ffff88007e8e1e50
 ffffffff815087ad ffff8800022318c0 0000000000000000 00000606ffffffe9
 0000000000000003 0000000000000000 ffff88007e8e1dd8 ffff88007e8e1dd8

Call Trace:
  [<ffffffff81013850>] native_read_tsc  <--
  [<ffffffff812826ca>] delay_tsc+0x4a 
  [<ffffffff81282676>] __const_udelay+0x46 
  [<ffffffff815087ad>] native_cpu_up+0x84f 
  [<ffffffff81508c02>] do_fork_idle 
  [<ffffffff8150af80>] _cpu_up+0xa1 
  [<ffffffff8150b0dc>] cpu_up+0xea 
  [<ffffffff81c27797>] kernel_init+0x169 
  [<ffffffff8100c0ca>] child_rip+0xa 
  [<ffffffff81c2762e>] kernel_init 
  [<ffffffff8100c0c0>] child_rip 
rip: ffffffff810013a8 hypercall_page+0x3a8 
flags: 00000086 s nz p
rsp: ffff88007e927ab0
rax: 0000000000000000   rcx: ffffffff81a8eb10   rdx: ffffffff81e2c980
rbx: 0000000000000000   rsi: ffff88007e927ab8   rdi: 0000000000000002
rbp: ffff88007e927ac8    r8: 0000000000000000    r9: 00000000fffffffa
r10: 0000000000000002   r11: 0000000000000002   r12: ffffffff81e2c980
r13: 00000000fffffffe   r14: 0000000000000000   r15: 0000000000000000
 cs: 0010        ss: 0018        ds: 0018        es: 0018
 fs: 0000 @ 0000000000000000
 gs: 0000 @ ffff880002220000/0000000000000000
Code (instr addr ffffffff810013a8)
cc cc cc cc cc cc cc cc cc cc cc cc cc b8 1d 00 00 00 0f 01 c1 <c3> cc cc cc cc 
cc cc cc cc cc cc 


Stack:
 ffffffff810033c2 ffff880000000003 ffffffff8103473e ffff88007e927b08
 ffffffff81515925 ffff88007e927ae8 ffffffff8178fe2a ffff88007e927dd8
 0000000000000009 ffff88007e925540 0000000000000046 ffff88007e927b18
 ffffffff8151598a ffff88007e927b98 ffffffff8150f741 ffff880000000010

Call Trace:
  [<ffffffff810013a8>] hypercall_page+0x3a8  <--
  [<ffffffff810033c2>] xen_panic_event+0x22 
  [<ffffffff8103473e>] native_apic_msr_read+0x2e 
  [<ffffffff81515925>] notifier_call_chain+0x55 
  [<ffffffff8151598a>] atomic_notifier_call_chain+0x1a 
  [<ffffffff8150f741>] panic+0xd2 
  [<ffffffff81513944>] oops_end+0xe4 
  [<ffffffff81046bfb>] no_context+0xfb 
  [<ffffffff81046e85>] __bad_area_nosemaphore+0x125 
  [<ffffffff81046f53>] bad_area_nosemaphore+0x13 
  [<ffffffff810476b1>] __do_page_fault+0x321 
  [<ffffffff81503579>] init_intel_cacheinfo+0x123 
  [<ffffffff8151586e>] do_page_fault+0x3e 
  [<ffffffff81512c25>] page_fault+0x25 
  [<ffffffff81505b70>] init_intel+0x14d 
  [<ffffffff81505b65>] init_intel+0x142 
  [<ffffffff81505663>] identify_cpu+0x25a 
  [<ffffffff815057bd>] identify_secondary_cpu+0x16 
  [<ffffffff81508fbc>] smp_store_cpu_info+0x68 
  [<ffffffff815090de>] start_secondary+0x120 



--- patched 3.9-rc6 ---

Call Trace:
  [<ffffffff81044ff6>] native_safe_halt+0x6  <--
  [<ffffffff8101c641>] default_idle+0x41
  [<ffffffff8101d16e>] cpu_idle+0xfe
  [<ffffffff815faa92>] rest_init+0x72
  [<ffffffff81a2bee9>] start_kernel+0x3fd
  [<ffffffff81a2b8eb>] unknown_bootoption
  [<ffffffff81a2b5e4>] x86_64_start_reservations+0x2a
  [<ffffffff81a2b6d7>] x86_64_start_kernel+0xf1

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users