
[Xen-bugs] [Bug 1558] New: Dom0 out of memory when mem-set its memory to 512M with 8G physical memory



http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1558

           Summary: Dom0 out of memory when mem-set its memory to 512M with
                    8G physical memory
           Product: Xen
           Version: unstable
          Platform: Other
        OS/Version: Linux
            Status: NEW
          Severity: normal
          Priority: P1
         Component: Hypervisor
        AssignedTo: xen-bugs@xxxxxxxxxxxxxxxxxxx
        ReportedBy: jiajun.xu@xxxxxxxxx


Environment:
------------
Service Arch (ia32/ia32e/IA64): ia32/ia32e
Guest Arch (ia32/ia32e/IA64): 
Guest OS Type (Linux/Windows):
Change Set: Xen-4.0.0-rc1 (20790)
Hardware: Stoakley
Other:

xen-changeset:   20790:0e3910f1de64

pvops git:
commit 60e0545e9649b08dd8ef5f2b991930049c40537e
Merge: b5624ab... 444c982...
Author: Jeremy Fitzhardinge <jeremy.fitzhardinge@xxxxxxxxxx>

ioemu git:
commit 3b531c2504c093ce716192e2abd410b418ae915a
Author: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
Date:   Fri Jan 8 17:57:10 2010 +0000


Bug detailed description:
--------------------------
In Xen-4.0.0-rc1, when Dom0 memory is set to a relatively small value with the
mem-set command, Dom0 runs out of memory very soon. My Stoakley machine has 8G
of memory in total; when its memory is set to 512M, the issue shows up. It is a
generic issue that also exists on other platforms.

My steps:
[root@vt-dp6 ~]$ xm li
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0  7944     8     r-----    430.0
[root@vt-dp6 ~]$ xm mem-set 0 512
[root@vt-dp6 ~]$ xm li
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0   512     8     r-----    434.1
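
The shrink can also be confirmed from inside Dom0 itself. A minimal check,
assuming the pvops balloon driver exposes its usual sysfs node (the exact
path can differ between kernels); the target should match the 512M set above:

  # inside Dom0, after the mem-set
  grep MemTotal /proc/meminfo     # roughly 512M minus kernel reservations
  free -m                         # shows how little is left for userspace
  # pvops balloon target, if this sysfs node exists on the running kernel
  cat /sys/devices/system/xen_memory/xen_memory0/target_kb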

[root@vt-dp6 ~]$ xm info
host                   : vt-dp6
release                : 2.6.31.6
version                : #1 SMP Tue Jan 12 16:16:32 CST 2010
machine                : x86_64
nr_cpus                : 8
nr_nodes               : 1
cores_per_socket       : 4
threads_per_core       : 1
cpu_mhz                : 2800
hw_caps                :
bfebfbff:20100800:00000000:00000940:0004e3bd:00000000:00000001:00000000
virt_caps              : hvm hvm_directio
total_memory           : 8164
free_memory            : 6728
node_to_cpu            : node0:0-7
node_to_memory         : node0:6728
node_to_dma32_mem      : node0:3131
xen_major              : 4
xen_minor              : 0
xen_extra              : .0-rc1
xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32
hvm-3.0-x86_32p hvm-3.0-x86_64
xen_scheduler          : credit
xen_pagesize           : 4096
platform_params        : virt_start=0xffff800000000000
xen_changeset          : Tue Jan 12 07:17:40 2010 +0000 20790:0e3910f1de64
xen_commandline        : console=com1,vga console=com1 com1=115200,8n1
loglvl=all guest_loglvl=all iommu=1 msi=1 iommu_inclusive_mapping=1
cc_compiler            : gcc version 4.1.2 20080704 (Red Hat 4.1.2-44)
cc_compile_by          : build
cc_compile_domain      : sh.intel.com
cc_compile_date        : Tue Jan 12 16:03:51 CST 2010
xend_config_format     : 4


Dom0 Dmesg Log:
##############
[  244.951161] mixer_applet2 invoked oom-killer: gfp_mask=0x200da, order=0,
oomkilladj=0
[  244.951183] Pid: 5112, comm: mixer_applet2 Not tainted 2.6.31.6 #1
[  244.951188] Call Trace:
[  244.951199]  [<ffffffff810a20ea>] ? cpuset_print_task_mems_allowed+0x8d/0x98
[  244.951207]  [<ffffffff810ce4fe>] oom_kill_process+0x88/0x21d
[  244.951214]  [<ffffffff810a10e2>] ? cpuset_mems_allowed_intersects+0x1c/0x1e
[  244.951221]  [<ffffffff810ce8cc>] ? badness+0x18f/0x1d3
[  244.951226]  [<ffffffff810ce94f>] __out_of_memory+0x3f/0x151
[  244.951232]  [<ffffffff810ceacf>] out_of_memory+0x6e/0xa5
[  244.951238]  [<ffffffff810d0f98>] __alloc_pages_nodemask+0x454/0x53c
[  244.951246]  [<ffffffff810eca2d>] read_swap_cache_async+0x48/0xe4
[  244.951252]  [<ffffffff810ecb21>] swapin_readahead+0x58/0x97
[  244.951260]  [<ffffffff810cbdb1>] ? find_get_page+0x1d/0x64
[  244.951295]  [<ffffffff810e1bb9>] handle_mm_fault+0x318/0x608
[  244.951303]  [<ffffffff81528f51>] do_page_fault+0x205/0x271
[  244.951309]  [<ffffffff81526f95>] page_fault+0x25/0x30
[  244.951316] DMA per-cpu:
[  244.951320] CPU    0: hi:    0, btch:   1 usd:   0
[  244.951324] CPU    1: hi:    0, btch:   1 usd:   0
[  244.951328] CPU    2: hi:    0, btch:   1 usd:   0
[  244.951332] CPU    3: hi:    0, btch:   1 usd:   0
[  244.951336] CPU    4: hi:    0, btch:   1 usd:   0
[  244.951340] CPU    5: hi:    0, btch:   1 usd:   0
[  244.951344] CPU    6: hi:    0, btch:   1 usd:   0
[  244.951348] CPU    7: hi:    0, btch:   1 usd:   0
[  244.951352] DMA32 per-cpu:
[  244.951355] CPU    0: hi:  186, btch:  31 usd:   0
[  244.951359] CPU    1: hi:  186, btch:  31 usd:   1
[  244.951363] CPU    2: hi:  186, btch:  31 usd:  30
[  244.951367] CPU    3: hi:  186, btch:  31 usd:   0
[  244.951372] CPU    4: hi:  186, btch:  31 usd:   0
[  244.951376] CPU    5: hi:  186, btch:  31 usd:   1
[  244.951380] CPU    6: hi:  186, btch:  31 usd:   2
[  244.951384] CPU    7: hi:  186, btch:  31 usd:   0
[  244.951387] Normal per-cpu:
[  244.951391] CPU    0: hi:  186, btch:  31 usd: 180
[  244.951395] CPU    1: hi:  186, btch:  31 usd: 155
[  244.951399] CPU    2: hi:  186, btch:  31 usd: 174
[  244.951403] CPU    3: hi:  186, btch:  31 usd:  72
[  244.951407] CPU    4: hi:  186, btch:  31 usd:  29
[  244.951411] CPU    5: hi:  186, btch:  31 usd: 110
[  244.951415] CPU    6: hi:  186, btch:  31 usd: 142
[  244.951419] CPU    7: hi:  186, btch:  31 usd:  46
[  244.951425] Active_anon:35 active_file:28 inactive_anon:12
[  244.951426]  inactive_file:33 unevictable:0 dirty:0 writeback:71 unstable:0
[  244.951427]  free:8845 slab:4530 mapped:70 pagetables:3238 bounce:0
[  244.951438] DMA free:9356kB min:12kB low:12kB high:16kB active_anon:0kB
inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB
present:9136kB pages_scanned:0 all_unreclaimable? yes
[  244.951446] lowmem_reserve[]: 0 3255 7055 7055
[  244.951457] DMA32 free:20052kB min:4956kB low:6192kB high:7432kB
active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB
unevictable:0kB present:3333344kB pages_scanned:0 all_unreclaimable? yes
[  244.951466] lowmem_reserve[]: 0 0 3800 3800
[  244.951477] Normal free:5972kB min:5784kB low:7228kB high:8676kB
active_anon:140kB inactive_anon:48kB active_file:112kB inactive_file:132kB
unevictable:0kB present:3891220kB pages_scanned:704 all_unreclaimable? yes
[  244.951486] lowmem_reserve[]: 0 0 0 0
[  244.951494] DMA: 3*4kB 2*8kB 3*16kB 4*32kB 3*64kB 2*128kB 0*256kB 1*512kB
0*1024kB 0*2048kB 2*4096kB = 9356kB
[  244.951514] DMA32: 7*4kB 5*8kB 5*16kB 8*32kB 7*64kB 4*128kB 3*256kB 3*512kB
4*1024kB 6*2048kB 0*4096kB = 20052kB
[  244.951534] Normal: 496*4kB 2*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB
0*512kB 0*1024kB 0*2048kB 1*4096kB = 6096kB
[  244.951553] 199 total pagecache pages
[  244.951557] 81 pages in swap cache
[  244.951560] Swap cache stats: add 28150, delete 28069, find 505/978
[  244.951565] Free swap  = 1950840kB
[  244.951568] Total swap = 2048276kB
[  244.987830] Out of memory: kill process 5071 (xend) score 19193 or a child
[  244.987867] Killed process 5072 (xend)
[  245.413250] gam_server invoked oom-killer: gfp_mask=0x200da, order=0,
oomkilladj=0
[  245.413270] Pid: 4941, comm: gam_server Not tainted 2.6.31.6 #1
[  245.413279] Call Trace:
[  245.413291]  [<ffffffff810a20ea>] ? cpuset_print_task_mems_allowed+0x8d/0x98
[  245.413299]  [<ffffffff810ce4fe>] oom_kill_process+0x88/0x21d
[  245.413307]  [<ffffffff810a10e2>] ? cpuset_mems_allowed_intersects+0x1c/0x1e
[  245.413313]  [<ffffffff810ce8cc>] ? badness+0x18f/0x1d3
[  245.413320]  [<ffffffff810ce94f>] __out_of_memory+0x3f/0x151
[  245.413325]  [<ffffffff810ceacf>] out_of_memory+0x6e/0xa5
[  245.413331]  [<ffffffff810d0f98>] __alloc_pages_nodemask+0x454/0x53c
[  245.413339]  [<ffffffff810eca2d>] read_swap_cache_async+0x48/0xe4
[  245.413345]  [<ffffffff810ecb21>] swapin_readahead+0x58/0x97
[  245.413352]  [<ffffffff810cbdb1>] ? find_get_page+0x1d/0x64
[  245.413380]  [<ffffffff810e1bb9>] handle_mm_fault+0x318/0x608
[  245.413388]  [<ffffffff81528f51>] do_page_fault+0x205/0x271
[  245.413393]  [<ffffffff81526f95>] page_fault+0x25/0x30
.......
###########
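
To recover the machine afterwards, Dom0 can be grown back with another
mem-set (the hypervisor still reports plenty of free_memory in xm info).
Below is a hedged sketch; the xend-config.sxp option names are from the
stock configuration file, and whether they also put a floor under an
explicit mem-set on Domain-0 depends on the xend version:

  # give memory back to Dom0
  xm mem-set 0 4096

  # related knobs in /etc/xen/xend-config.sxp (they mainly govern xend's
  # automatic ballooning of Dom0 when guests are created):
  #   (enable-dom0-ballooning yes)
  #   (dom0-min-mem 196)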


-- 
Configure bugmail: 
http://bugzilla.xensource.com/bugzilla/userprefs.cgi?tab=email
------- You are receiving this mail because: -------
You are the assignee for the bug, or are watching the assignee.

_______________________________________________
Xen-bugs mailing list
Xen-bugs@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-bugs


 

