
Re: [Xen-devel] Bisected Xen-unstable: "Segment register inaccessible for d1v0" when starting HVM guest on intel




> -----Original Message-----
> From: Andrew Cooper [mailto:andrew.cooper3@xxxxxxxxxx]
> Sent: Tuesday, July 01, 2014 1:31 AM
> To: Sander Eikelenboom; Jan Beulich
> Cc: Wu, Feng; xen-devel@xxxxxxxxxxxxxxxxxxxx
> Subject: Re: [Xen-devel] Bisected Xen-unstable: "Segment register inaccessible for d1v0" when starting HVM guest on intel
> 
> On 30/06/14 17:37, Sander Eikelenboom wrote:
> > Monday, June 30, 2014, 5:45:40 PM, you wrote:
> >
> >>>>> On 28.06.14 at 22:21, <linux@xxxxxxxxxxxxxx> wrote:
> >>> On Intel machines, when starting an HVM guest with qemu upstream, I get:
> >>>
> >>> (d2) [2014-06-27 20:07:46] Booting from Hard Disk...
> >>> (d2) [2014-06-27 20:07:46] Booting from 0000:7c00
> >>> (XEN) [2014-06-27 20:08:00] irq.c:380: Dom1 callback via changed to Direct Vector 0xf3
> >>> (XEN) [2014-06-27 20:08:00] irq.c:380: Dom2 callback via changed to Direct Vector 0xf3
> >>> (XEN) [2014-06-27 20:08:03] Segment register inaccessible for d1v0
> >>> (XEN) [2014-06-27 20:08:03] (If you see this outside of debugging activity, please report to xen-devel@xxxxxxxxxxxxxxxxxxxx)
> >> Could you put a dump_execution_state() alongside the respective
> >> printk(), so we can see on what path(s) this is actually happening?
> >> Thanks, Jan
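(For reference, the instrumentation Jan asks for would look roughly like the
fragment below; the exact placement and message wording are my assumptions,
and dump_execution_state() is the existing Xen helper named above:)

    printk(XENLOG_WARNING "Segment register inaccessible for %pv\n"
           "(If you see this outside of debugging activity, "
           "please report to xen-devel)\n", v);
    dump_execution_state();   /* dumps registers and the Xen call trace */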
> > Hi Jan,
> >
> > Sure see below (complete xl-dmesg attached)
> >
> > --
> > Sander
> >
> > (XEN) [2014-06-30 16:33:12] irq.c:380: Dom2 callback via changed to Direct Vector 0xf3
> > (XEN) [2014-06-30 16:33:14] Segment register inaccessible for d2v0
> > (XEN) [2014-06-30 16:33:14] (If you see this outside of debugging activity, please report to xen-devel@xxxxxxxxxxxxxxxxxxxx)
> > (XEN) [2014-06-30 16:33:14] ----[ Xen-4.5-unstable  x86_64  debug=y  Not tainted ]----
> > (XEN) [2014-06-30 16:33:14] CPU:    2
> > (XEN) [2014-06-30 16:33:14] RIP:    e008:[<ffff82d0801dc9c5>] vmx_get_segment_register+0x4d/0x422
> > (XEN) [2014-06-30 16:33:14] RFLAGS: 0000000000010286   CONTEXT: hypervisor
> > (XEN) [2014-06-30 16:33:14] rax: 0000000000000000   rbx: ffff830218537b18   rcx: 0000000000000000
> > (XEN) [2014-06-30 16:33:14] rdx: ffff83021853c020   rsi: 000000000000000a   rdi: ffff82d08028f6c0
> > (XEN) [2014-06-30 16:33:14] rbp: ffff830218537ad0   rsp: ffff830218537a90   r8:  ffff830218588000
> > (XEN) [2014-06-30 16:33:14] r9:  0000000000000002   r10: 000000000000000e   r11: 0000000000000002
> > (XEN) [2014-06-30 16:33:14] r12: ffff8300dc8f8000   r13: 0000000000000001   r14: 00000000007ff000
> > (XEN) [2014-06-30 16:33:14] r15: 00000000f5f1f880   cr0: 000000008005003b   cr4: 00000000001526f0
> > (XEN) [2014-06-30 16:33:14] cr3: 0000000215c7b000   cr2: 00000000ffc35000
> > (XEN) [2014-06-30 16:33:14] ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
> > (XEN) [2014-06-30 16:33:14] Xen stack trace from rsp=ffff830218537a90:
> > (XEN) [2014-06-30 16:33:14]    000000000000177f 0000000000000000 ffff830218537af0 ffff830218537ba8
> > (XEN) [2014-06-30 16:33:14]    ffff8300dc8f8000 0000000000000003 00000000007ff000 00000000f5f1f880
> > (XEN) [2014-06-30 16:33:14]    ffff830218537b60 ffff82d0801f4415 ffff830218537b2c ffff830209dd8000
> > (XEN) [2014-06-30 16:33:14]    ffff830218537b60 ffff83020a5e2930 ffff830218530000 007ff00300209fc7
> > (XEN) [2014-06-30 16:33:14]    ffff830209dd8000 0000000318537b48 ffff82d0801eeb08 0000000700000000
> > (XEN) [2014-06-30 16:33:14]    0000000000000001 ffff83020a5e2930 ffff82e0041eafe0 000000000020f57f
> > (XEN) [2014-06-30 16:33:14]    ffff830218537ccc 000000000177f000 ffff830218537c10 ffff82d0802204a8
> > (XEN) [2014-06-30 16:33:14]    ffff83020f57f000 ffff830218537cf4 ffff83020f57f000 0000000000000000
> > (XEN) [2014-06-30 16:33:14]    00000000f5f1f880 ffff8300dc8f8000 ffff830218537c00 00000000f5f1f880
> > (XEN) [2014-06-30 16:33:14]    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> > (XEN) [2014-06-30 16:33:14]    0000000000000000 0000000000000082 0000000215d8d000 ffff8300dc8f8000
> > (XEN) [2014-06-30 16:33:14]    ffff830218537ccc ffff82d080281200 ffff83020a5e2930 00000000f5f1f880
> > (XEN) [2014-06-30 16:33:14]    ffff830218537c20 ffff82d08022062e ffff830218537c80 ffff82d0801ec215
> > (XEN) [2014-06-30 16:33:14]    ffff830218537c60 ffff82d080129c6a ffff8300db453000 ffff83021853ce50
> > (XEN) [2014-06-30 16:33:14]    0000000000000000 00000000000f5f1f ffff8300dbdf7000 000000000000002c
> > (XEN) [2014-06-30 16:33:14]    ffff83021853c068 ffff8300dc8f8000 ffff830218537d10 ffff82d0801ba88d
> > (XEN) [2014-06-30 16:33:14]    ffff830218537d60 ffff830218530000 00000005802f7f00 ffff830218537d70
> > (XEN) [2014-06-30 16:33:14]    ffff830218537d54 00000000f5f1f880 0000000000000880 000000030000002c
> > (XEN) [2014-06-30 16:33:14]    ffff830218537ce0 ffff82d080184208 ffff830218537d50 000000000000002c
> > (XEN) [2014-06-30 16:33:14]    ffff8300dbdf7000 0000000000000002 ffff83021853c068 0000000000000001
> > (XEN) [2014-06-30 16:33:14] Xen call trace:
> > (XEN) [2014-06-30 16:33:14]    [<ffff82d0801dc9c5>] vmx_get_segment_register+0x4d/0x422
> > (XEN) [2014-06-30 16:33:14]    [<ffff82d0801f4415>] guest_walk_tables_3_levels+0x189/0x520
> > (XEN) [2014-06-30 16:33:14]    [<ffff82d0802204a8>] hap_p2m_ga_to_gfn_3_levels+0x158/0x2c2
> > (XEN) [2014-06-30 16:33:14]    [<ffff82d08022062e>] hap_gva_to_gfn_3_levels+0x1c/0x1e
> > (XEN) [2014-06-30 16:33:14]    [<ffff82d0801ec215>] paging_gva_to_gfn+0xb8/0xce
> > (XEN) [2014-06-30 16:33:14]    [<ffff82d0801ba88d>] __hvm_copy+0x87/0x354
> > (XEN) [2014-06-30 16:33:14]    [<ffff82d0801bac7c>] hvm_copy_to_guest_virt_nofault+0x1e/0x20
> > (XEN) [2014-06-30 16:33:14]    [<ffff82d0801bace5>] copy_to_user_hvm+0x67/0x87
> > (XEN) [2014-06-30 16:33:14]    [<ffff82d08016237c>] update_runstate_area+0x98/0xfb
> > (XEN) [2014-06-30 16:33:14]    [<ffff82d0801623f0>] _update_runstate_area+0x11/0x39
> > (XEN) [2014-06-30 16:33:14]    [<ffff82d0801634db>] context_switch+0x10c3/0x10fa
> > (XEN) [2014-06-30 16:33:14]    [<ffff82d080126a19>] schedule+0x5a8/0x5da
> > (XEN) [2014-06-30 16:33:14]    [<ffff82d0801297f9>] __do_softirq+0x81/0x8c
> > (XEN) [2014-06-30 16:33:14]    [<ffff82d080129852>] do_softirq+0x13/0x15
> > (XEN) [2014-06-30 16:33:14]    [<ffff82d08015f70a>] idle_loop+0x67/0x77
> > (XEN) [2014-06-30 16:33:14]
> > (XEN) [2014-06-30 16:33:15] irq.c:270: Dom2 PCI link 0 changed 5 -> 0
> > (XEN) [2014-06-30 16:33:15] irq.c:270: Dom2 PCI link 1 changed 10 -> 0
> > (XEN) [2014-06-30 16:33:15] irq.c:270: Dom2 PCI link 2 changed 11 -> 0
> > (XEN) [2014-06-30 16:33:15] irq.c:270: Dom2 PCI link 3 changed 5 -> 0
> > (XEN) [2014-06-30 16:33:37] grant_table.c:295:d0v0 Increased maptrack size to 2 frames
> > (XEN) [2014-06-30 16:33:37] grant_table.c:295:d0v0 Increased maptrack size to 3 frames
> 
> 
> Right - I see the problem, but the fix is not obvious.
> 
> During a context switch (where the VMCS is unavailable), we try to
> update the runstate area.
> 
> Following the identified changeset, we now unconditionally try to read
> SS for any HVM vcpu supervisor-mode access, but in this case there is
> no actual pagefault behind the walk.
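(To illustrate the window: a simplified sketch of the ordering on this path,
reconstructed from the trace above rather than quoted from the source:)

    set_current(next);            /* 'current' now names the incoming vcpu */
    _update_runstate_area(next);  /* walks guest pagetables and asks VMX for
                                   * SS, but no VMCS is loaded at this point */
    /* ... only later, on the way back into the guest: */
    vmx_do_resume(next);          /* loads the vcpu's VMCS */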
> 
> I think there might need to be a distinction between "Xen is walking the
> guest pagetables because of a fault" and "Xen is walking the guest
> pagetables in an attempt to copy_to/from_guest".  Neither SMEP nor SMAP
> has any business being checked for Xen's own accesses; the current vcpu's
> operating mode has no bearing on whether Xen should be able to update
> the runstate info.
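(For illustration, that distinction could be carried in the PFEC bits handed
to the walker. A minimal sketch, where PFEC_xen_access is a made-up software
flag and PFEC_user_mode mirrors the real define:)

    #include <stdbool.h>
    #include <stdint.h>

    #define PFEC_user_mode  (1U << 2)   /* mirrors Xen's define */
    #define PFEC_xen_access (1U << 16)  /* hypothetical: access is Xen's own */

    /* Should SMEP/SMAP be enforced for this guest pagetable walk? */
    static bool smep_smap_applies(uint32_t pfec)
    {
        if ( pfec & PFEC_xen_access )
            return false;  /* copy_to/from_guest on Xen's behalf: never */
        return !(pfec & PFEC_user_mode);  /* supervisor-mode guest access */
    }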
> 
> ~Andrew

It seems we cannot get the guest SS here via hvm_get_segment_register(),
since in this case the function is called between setting 'current' and
vmx_do_resume(), i.e. while the vcpu's VMCS is not yet loaded. Would the
following be an acceptable way to solve this issue (a rough sketch follows
the list):

1. Store GUEST_SS into regs->ss in vmx_vmexit_handler(), just as is already
   done for GUEST_RIP/GUEST_RSP/GUEST_RFLAGS.
2. Read the guest SS from struct cpu_user_regs in guest_walk_tables().
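
Something along these lines for step 1 (a fragment for vmx_vmexit_handler();
this assumes the two-argument __vmread() and the GUEST_SS_SELECTOR field name
from asm-x86/hvm/vmx/vmcs.h):

    unsigned long ss;

    /* Latch the guest SS selector while the VMCS is still loaded,
     * next to the existing GUEST_RIP/GUEST_RSP/GUEST_RFLAGS reads. */
    __vmread(GUEST_SS_SELECTOR, &ss);
    regs->ss = ss;   /* step 2: guest_walk_tables() reads regs->ss instead */

One caveat: the selector carries only the RPL, not the cached descriptor
attributes, so whether that is enough for the checks the walker wants to do
would need confirming.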

Thanks,
Feng
