
Re: [Xen-devel] [V3 PATCH 9/9] pvh dom0: add opt_dom0pvh to setup.c



On 03/12/13 03:33, Mukesh Rathor wrote:
> On Mon, 2 Dec 2013 12:46:49 -0800
> Mukesh Rathor <mukesh.rathor@xxxxxxxxxx> wrote:
> 
>> On Mon, 2 Dec 2013 12:38:12 -0800
>> Mukesh Rathor <mukesh.rathor@xxxxxxxxxx> wrote:
>>
>>> On Mon, 2 Dec 2013 20:38:26 +0100
>>> Roger Pau Monné <roger.pau@xxxxxxxxxx> wrote:
>>>
>>>> On 02/12/13 20:30, Mukesh Rathor wrote:
>>>>> On Mon, 2 Dec 2013 16:09:50 +0100
>>>>> Roger Pau Monné <roger.pau@xxxxxxxxxx> wrote:
>>>>>
>>> ......
>>>> Thanks for the input, I haven't been able to do much debugging,
>>>> but AFAICT the problem comes from alloc_vcpu_guest_context
>>>> returning NULL, because alloc_domheap_page(NULL, 0) inside of
>>>> that function also returned NULL (not able to allocate a page).
>>>>
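(For reference, the shape of that failure path is roughly the following;
a paraphrase for illustration only, not the exact code in either tree:

    struct vcpu_guest_context *alloc_vcpu_guest_context(void)
    {
        /* Anonymous domheap page: alloc_domheap_page(NULL, 0) asks the
         * allocator for a page not accounted to any domain. */
        struct page_info *pg = alloc_domheap_page(NULL, 0);

        if ( pg == NULL )     /* heap exhausted -> the NULL seen above */
            return NULL;

        return __map_domain_page_global(pg);
    }

so an allocation failure there propagates straight up and vcpu creation
fails.)
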
>>>> I'm currently using your tree for both Linux (branch tmp2) and Xen
>>>> (branch dom0pvh-v3), no added or removed patches (only the line
>>>> mentioned above in order to boot Dom0 in PVH mode).
>>>>
>>>> Can you confirm you are able to boot a PVH Dom0 from your Xen and
>>>> Linux trees?
>>>
>>> yes, I'd not be submitting the patches otherwise ;)..
>>>
>>> Name                                        ID   Mem VCPUs      State   Time(s)
>>> Domain-0                                     0  1200     3     r-----    6250.6
>>
>> FWIW, here's my command line:
>>
>>         kernel /xen.hybrid.kdb console=com1,vga com1=57600,8n1 dom0_mem=1200M maxcpus=4 dom0_max_vcpus=3 noreboot sync_console dom0pvh
>>         module /bzImage.hyb ro root=/dev/sda10 console=tty console=hvc0,57600n8
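(As an aside for anyone following along: "dom0pvh" above is the boot
option this patch series adds. A minimal sketch of how such a knob is
usually wired up in xen/arch/x86/setup.c; only the option name and
opt_dom0pvh come from the patch title, the rest is the standard
boolean_param() plumbing:

    /* "dom0pvh" on the Xen command line flips this to 1. */
    static bool_t __initdata opt_dom0pvh;
    boolean_param("dom0pvh", opt_dom0pvh);

opt_dom0pvh is then consulted once, when dom0 is constructed.)
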
>>
>>
>> Something looks to be going wrong with memory on your system; the line
>>
>>     Released 18446744073708661104 pages of unused memory
>>
>> is worrisome. Maybe start with dom0_mem and see if that makes a
>> difference?
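
(That figure is indeed a red flag: 18446744073708661104 == 2^64 - 890512,
i.e. a small negative page count printed as an unsigned 64-bit value, so
the release path on the Linux side has almost certainly underflowed. A
stand-alone demo of the arithmetic, not Xen or Linux code:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* 0 - 890512 wraps around to 2^64 - 890512. */
        uint64_t released = 0 - (uint64_t)890512;
        printf("Released %llu pages of unused memory\n",
               (unsigned long long)released);
        return 0;
    }

which prints the exact value seen in the boot log.)
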
>>
>> OTOH, after I am done cranking out another dom0 version addressing
>> Jan's comments, I'll try it out on my system without dom0_mem
>> specified. That may take a day or two.
> 
> 
> Roger, 
> 
> I'm able to reproduce it by not specifying the dom0_mem option. Will
> debug Linux tomorrow to see what's going on. JFYI.

Thanks, I've tried booting with dom0_mem set and it seems to work, but 
I've also found that rebooting a PVH Dom0 triggers the following 
assertion failure (this also happens with your new dom0pvh-v4 branch):

(XEN) Domain 0 shutdown: rebooting machine.
(XEN) Assertion 'read_cr0() & X86_CR0_TS' failed at vmx.c:595
(XEN) ----[ Xen-4.4-unstable  x86_64  debug=y  Not tainted ]----
(XEN) CPU:    0
(XEN) RIP:    e008:[<ffff82d0801d8992>] vmx_ctxt_switch_from+0x20/0x155
(XEN) RFLAGS: 0000000000010046   CONTEXT: hypervisor
(XEN) rax: 0000000080050033   rbx: ffff8300df8fb000   rcx: 0000000000000000
(XEN) rdx: 00000000ffffffff   rsi: ffff82d0802cffc0   rdi: ffff8300df8fb000
(XEN) rbp: ffff82d0802cfa58   rsp: ffff82d0802cfa48   r8:  0000000000000000
(XEN) r9:  ffff82cffffff000   r10: ffff83019a5c1f50   r11: 00000016eba95224
(XEN) r12: ffff8300df8fb000   r13: 0000000000000000   r14: ffff82d0802cff18
(XEN) r15: ffff83019a5c75f8   cr0: 0000000080050033   cr4: 00000000000026f0
(XEN) cr3: 000000019a765000   cr2: 00007f9f2861b000
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
(XEN) Xen stack trace from rsp=ffff82d0802cfa48:
(XEN)    ffff8300dfdf2000 ffff8300dfdf2000 ffff82d0802cfaa8 ffff82d08015f8a0
(XEN)    ffff82d0802cfac8 ffff82d0801730c2 ffff82d0802cfac8 0000000000000001
(XEN)    0000000000000046 000000000019a400 0000000000000012 ffff83019a5c75f8
(XEN)    ffff82d0802cfac8 ffff82d08015fbd1 ffff8300dfdf2000 0000000000000000
(XEN)    ffff82d0802cfad8 ffff82d08015fbf6 ffff82d0802cfb28 ffff82d08016239b
(XEN)    ffff82d0802cfb18 ffff82d080140ad3 0000000000000002 ffff83019a5c7530
(XEN)    0000000000000000 0000000000000000 0000000000000012 ffff83019a5c75f8
(XEN)    ffff82d0802cfb38 ffff82d08014f1f3 ffff82d0802cfb98 ffff82d08014e304
(XEN)    0000000000000093 ffff83019a5c7604 0001000000000021 0000000400000000
(XEN)    000000000000006b 0000000000000000 0000000000000001 0000000000000000
(XEN)    ffff82d080165e21 00000000ffffffff ffff82d0802cfba8 ffff82d080144e1a
(XEN)    ffff82d0802cfbb8 ffff82d080165e38 ffff82d0802cfbe8 ffff82d080165920
(XEN)    0000000000000000 0000000000000001 0000000000000000 0000000000000000
(XEN)    ffff82d0802cfc18 ffff82d080166c76 0000000000010000 0000000000000000
(XEN)    0000000000000001 ffff82d08026fd20 ffff82d0802cfc48 ffff82d080166d82
(XEN)    0000000000000009 ffff82d080274080 00000000000000fb ffff83019a577ad0
(XEN)    ffff82d0802cfc68 ffff82d0801670a6 ffff82d0802cfc68 ffff82d08018268d
(XEN)    ffff82d0802cfc88 ffff82d080182715 0000000000000000 ffff82d0802cfdb8
(XEN)    ffff82d0802cfcd8 ffff82d08018239c ffff82d0801405e6 0000000000000061
(XEN)    ffff82d0802cfce8 0000000000000000 ffff82d0802cfdb8 00000000000000fb
(XEN) Xen call trace:
(XEN)    [<ffff82d0801d8992>] vmx_ctxt_switch_from+0x20/0x155
(XEN)    [<ffff82d08015f8a0>] __context_switch+0xa0/0x36c
(XEN)    [<ffff82d08015fbd1>] __sync_local_execstate+0x65/0x81
(XEN)    [<ffff82d08015fbf6>] sync_local_execstate+0x9/0xb
(XEN)    [<ffff82d08016239b>] map_domain_page+0x86/0x51a
(XEN)    [<ffff82d08014f1f3>] map_vtd_domain_page+0xd/0x1a
(XEN)    [<ffff82d08014e304>] io_apic_read_remap_rte+0x156/0x2a2
(XEN)    [<ffff82d080144e1a>] iommu_read_apic_from_ire+0x29/0x30
(XEN)    [<ffff82d080165e38>] io_apic_read+0x17/0x65
(XEN)    [<ffff82d080165920>] __ioapic_read_entry+0x3d/0x69
(XEN)    [<ffff82d080166c76>] clear_IO_APIC_pin+0x1a/0xf3
(XEN)    [<ffff82d080166d82>] clear_IO_APIC+0x33/0x62
(XEN)    [<ffff82d0801670a6>] disable_IO_APIC+0xd/0x89
(XEN)    [<ffff82d080182715>] smp_send_stop+0x5b/0x66
(XEN)    [<ffff82d08018239c>] machine_restart+0x84/0x225
(XEN)    [<ffff82d080182548>] __machine_restart+0xb/0xd
(XEN)    [<ffff82d080128dc9>] smp_call_function_interrupt+0xa5/0xca
(XEN)    [<ffff82d080182b08>] call_function_interrupt+0x33/0x35
(XEN)    [<ffff82d08016b988>] do_IRQ+0x9e/0x660
(XEN)    [<ffff82d0801645df>] common_interrupt+0x5f/0x70
(XEN)    [<ffff82d0801a7981>] mwait_idle+0x2ad/0x30e
(XEN)    [<ffff82d08015ed42>] idle_loop+0x64/0x7e
(XEN)
(XEN)
(XEN) ****************************************
(XEN) Panic on CPU 0:
(XEN) Assertion 'read_cr0() & X86_CR0_TS' failed at vmx.c:595
(XEN) ****************************************
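
For what it's worth, the call trace shows the reboot IPI path
(smp_send_stop -> disable_IO_APIC -> io_apic_read -> map_domain_page ->
sync_local_execstate -> __context_switch) reaching vmx_ctxt_switch_from()
with CR0.TS clear: the dumped cr0 is 0x80050033, and bit 3 (TS) is not
set, which is exactly what the ASSERT checks. A stand-alone illustration
of the failing check, with read_cr0() replaced by the dumped value (not
hypervisor code):

    #include <stdio.h>

    #define X86_CR0_TS (1UL << 3)   /* CR0 bit 3: Task Switched */

    int main(void)
    {
        unsigned long cr0 = 0x80050033UL;  /* cr0 from the register dump */

        /* Hypervisor-side check: ASSERT(read_cr0() & X86_CR0_TS); */
        printf("assertion %s\n", (cr0 & X86_CR0_TS) ? "holds" : "fires");
        return 0;
    }

So the context-switch-out path expects TS to still be set (lazy FPU
handling), and on this reboot path it isn't.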


 

