
Re: [Xen-devel] [PATCH v4 0/3] x86: modify_ldt improvement, test, and config option



On 07/28/2015 08:47 PM, Andrew Cooper wrote:
> On 29/07/2015 01:21, Andy Lutomirski wrote:
>> On Tue, Jul 28, 2015 at 10:10 AM, Boris Ostrovsky
>> <boris.ostrovsky@xxxxxxxxxx> wrote:
>>> On 07/28/2015 01:07 PM, Andy Lutomirski wrote:
>>>> On Tue, Jul 28, 2015 at 9:30 AM, Andrew Cooper
>>>> <andrew.cooper3@xxxxxxxxxx> wrote:
>>>>> I suspect that the set_ldt(NULL, 0) call hasn't reached Xen by the
>>>>> time xen_free_ldt() attempts to nab back the pages which Xen still
>>>>> has mapped as an LDT.

>>>> I just instrumented it with yet more LSL instructions.  I'm pretty
>>>> sure that set_ldt really is clearing at least LDT entry zero.
>>>> Nonetheless the free_ldt call still oopses.

>>> Yes, I added some instrumentation to the hypervisor and we definitely
>>> set LDT to NULL before failing.
>>>
>>> -boris
>> Looking at map_ldt_shadow_page: what keeps shadow_ldt_mapcnt from
>> getting incremented once on each CPU at the same time if both CPUs
>> fault in the same shadow LDT page at the same time?
> Nothing, but that is fine.  If a page is in use in two vcpus' LDTs, it
> is expected to have a type refcount of 2.

>> Similarly, what keeps both CPUs from calling get_page_type at the
>> same time and therefore losing track of the page type reference count?
> a cmpxchg() loop in the depths of __get_page_type().
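(To connect this with the refcount-of-2 answer above: a minimal
userspace model of such a loop -- field layout and names are mine, not
Xen's actual __get_page_type() -- in which two CPUs faulting in the
same LDT page race on the cmpxchg, the loser retries, and the word ends
up with the type set once and a count of 2.)

#include <stdatomic.h>

/* Toy type-and-count word; made-up layout, not Xen's. */
#define PGT_type_mask  0xf0000000u
#define PGT_count_mask 0x0fffffffu

static _Atomic unsigned int type_info;

static int get_page_type(unsigned int type)
{
	unsigned int x = atomic_load(&type_info);

	for (;;) {
		unsigned int nx;

		if ((x & PGT_count_mask) == 0)
			nx = type | 1;	/* first user: claim the type */
		else if ((x & PGT_type_mask) != type)
			return 0;	/* already typed differently */
		else
			nx = x + 1;	/* e.g. the second vcpu's LDT ref */

		/* the cmpxchg loop: on a race, x is reloaded and we retry */
		if (atomic_compare_exchange_weak(&type_info, &x, nx))
			return 1;
	}
}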

>> I don't see why vmalloc or vm_unmap_aliases would have anything to do
>> with this, though.

So, just for kicks, I made lazy_max_pages() return 0 so that vmaps are freed immediately, and the problem went away.
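(Why that plausibly helps, as I understand mm/vmalloc.c: vunmap is
lazy, so freed areas keep stale page-table aliases and TLB entries
around until the accumulated total crosses lazy_max_pages(), and only
then is everything purged and flushed.  Heavily simplified model below;
the real code batches vmap_areas per CPU.)

#include <stdio.h>

/* Toy model of the lazy-purge threshold in mm/vmalloc.c. */
static unsigned long vmap_lazy_nr;

static unsigned long lazy_max_pages(void)
{
	return 0;	/* the experiment: 0 => purge on every free */
}

static void purge_vmap_area_lazy(void)
{
	/* stand-in for the real purge: drop stale aliases, flush TLBs */
	printf("purging %lu lazily freed pages\n", vmap_lazy_nr);
	vmap_lazy_nr = 0;
}

static void free_vmap_area_lazy(unsigned long nr_pages)
{
	vmap_lazy_nr += nr_pages;
	if (vmap_lazy_nr > lazy_max_pages())
		purge_vmap_area_lazy();
}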

I also saw this warning, BTW:

[  178.686542] ------------[ cut here ]------------
[  178.686554] WARNING: CPU: 0 PID: 16440 at ./arch/x86/include/asm/mmu_context.h:96 load_mm_ldt+0x70/0x76()
[  178.686558] DEBUG_LOCKS_WARN_ON(!irqs_disabled())
[  178.686561] Modules linked in:
[  178.686566] CPU: 0 PID: 16440 Comm: kworker/u2:1 Not tainted 4.1.0-32b #80
[  178.686570]  00000000 00000000 ea4e3df8 c1670e71 00000000 ea4e3e28 c106ac1e c1814e43
[  178.686577]  ea4e3e54 00004038 c181bc2c 00000060 c166fd3b c166fd3b e6705dc0 00000000
[  178.686583]  ea665000 ea4e3e40 c106ad03 00000009 ea4e3e38 c1814e43 ea4e3e54 ea4e3e5c
[  178.686589] Call Trace:
[  178.686594]  [<c1670e71>] dump_stack+0x41/0x52
[  178.686598]  [<c106ac1e>] warn_slowpath_common+0x8e/0xd0
[  178.686602]  [<c166fd3b>] ? load_mm_ldt+0x70/0x76
[  178.686609]  [<c166fd3b>] ? load_mm_ldt+0x70/0x76
[  178.686612]  [<c106ad03>] warn_slowpath_fmt+0x33/0x40
[  178.686615]  [<c166fd3b>] load_mm_ldt+0x70/0x76
[  178.686619]  [<c11ad5e9>] flush_old_exec+0x6f9/0x750
[  178.686626]  [<c11efb54>] load_elf_binary+0x2b4/0x1040
[  178.686630]  [<c1173785>] ? page_address+0x15/0xf0
[  178.686633]  [<c106466f>] ? kunmap+0x1f/0x70
[  178.686636]  [<c11ac819>] search_binary_handler+0x89/0x1c0
[  178.686639]  [<c11add40>] do_execveat_common+0x4c0/0x620
[  178.686653]  [<c11673e3>] ? kmemdup+0x33/0x50
[  178.686659]  [<c10c5e3b>] ? __call_rcu.constprop.66+0xbb/0x220
[  178.686673]  [<c11adec4>] do_execve+0x24/0x30
[  178.686679]  [<c107c0be>] ____call_usermodehelper+0xde/0x120
[  178.686684]  [<c1677501>] ret_from_kernel_thread+0x21/0x30
[  178.686696]  [<c107bfe0>] ? __request_module+0x240/0x240
[  178.686701] ---[ end trace 8b3f5341f50e6c88 ]---
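(My reading of that warning, reconstructed from the trace rather than
quoted from the v4 patch: the check at mmu_context.h:96 looks roughly
like the below, and flush_old_exec() reaches load_mm_ldt() with IRQs
enabled, so the assertion trips on the exec path.)

/* Approximate shape of the patched helper (reconstruction). */
static inline void load_mm_ldt(struct mm_struct *mm)
{
	struct ldt_struct *ldt = lockless_dereference(mm->context.ldt);

	/* mmu_context.h:96 in the trace above: */
	DEBUG_LOCKS_WARN_ON(!irqs_disabled());

	if (unlikely(ldt))
		set_ldt(ldt->entries, ldt->size);
	else
		clear_LDT();
}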


-boris
