
[Xen-users] Page allocation failure



Dear list,

I am seeing the same error across multiple machines during heavy I/O, either disk or network. The VMs run on different Intel CPUs (Core 2 Quad, Core i5, Xeon) and on a variety of boards (Abit, Asus, Supermicro).

The machines that get this error are running either BackupPC, Zabbix (MySQL) or SABnzbd. I can also reproduce the error on the Supermicros with a looped wget of an Ubuntu ISO, as they are on gigabit Ethernet and will download at ~70MB/s.
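The actual reproduction is just wget in a shell loop; the Python sketch below is a rough equivalent for illustration only (the ISO URL is a placeholder, not the one I actually pull):

#!/usr/bin/env python3
# Rough Python equivalent of the wget loop used to reproduce the failure:
# repeatedly stream a large ISO to disk so the NIC and page cache stay busy.
# The URL below is only a placeholder.
import urllib.request

ISO_URL = "http://releases.ubuntu.com/some-release/placeholder.iso"  # placeholder

while True:
    with urllib.request.urlopen(ISO_URL) as resp, open("/tmp/repro.iso", "wb") as out:
        while True:
            chunk = resp.read(1 << 20)  # 1 MiB at a time keeps memory use flat
            if not chunk:
                break
            out.write(chunk)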

I prefer Debian; however, whilst troubleshooting I have also used the SUSE kernel with Debian and with CentOS. I have compiled kernels from 2.6.3x to 3.0.x and Xen from 4.0.1 to 4.1, and have yet to shake the issue.

The following is the output of the error; it is usually repeated several times. There is plenty of RAM available, with 1GB+ usually sitting in buffers/cache, and ~1GB of swap is also free should it be needed.

It's worth noting that the "swapper" portion is usually a random process: I've seen "mingetty" and "rsyslog", amongst others, though "swapper" is the most common. (A rough sketch of how these can be tallied from the kernel log follows after the trace below.)

[488054.896037] swapper: page allocation failure. order:0, mode:0x20
[488054.896056] Pid: 0, comm: swapper Not tainted 2.6.32-5-amd64 #1
[488054.896067] Call Trace:
[488054.896074]<IRQ>   [<ffffffff810ba5d6>] ? __alloc_pages_nodemask+0x592/0x5f4
[488054.896100]  [<ffffffff810e6912>] ? new_slab+0x5b/0x1ca
[488054.896112]  [<ffffffff810e6c71>] ? __slab_alloc+0x1f0/0x39b
[488054.896124]  [<ffffffff8124851e>] ? __alloc_skb+0x3e/0x15a
[488054.896135]  [<ffffffff8124851e>] ? __alloc_skb+0x3e/0x15a
[488054.896146]  [<ffffffff810e704d>] ? kmem_cache_alloc_node+0x8b/0x10b
[488054.896158]  [<ffffffff8124851e>] ? __alloc_skb+0x3e/0x15a
[488054.896170]  [<ffffffff8128b0e7>] ? tcp_delack_timer+0x0/0x1fe
[488054.896182]  [<ffffffff812880fd>] ? tcp_send_ack+0x23/0xf4
[488054.896193]  [<ffffffff8128b275>] ? tcp_delack_timer+0x18e/0x1fe
[488054.896207]  [<ffffffff8105a5ff>] ? run_timer_softirq+0x1c9/0x268
[488054.896220]  [<ffffffff81094a9f>] ? handle_IRQ_event+0x117/0x126
[488054.896234]  [<ffffffff81053cab>] ? __do_softirq+0xdd/0x1a6
[488054.896245]  [<ffffffff81096154>] ? handle_percpu_irq+0x4e/0x6a
[488054.896258]  [<ffffffff81011cac>] ? call_softirq+0x1c/0x30
[488054.896269]  [<ffffffff8101322b>] ? do_softirq+0x3f/0x7c
[488054.896280]  [<ffffffff81053b1b>] ? irq_exit+0x36/0x76
[488054.896291]  [<ffffffff811ef48c>] ? xen_evtchn_do_upcall+0x33/0x41
[488054.896304]  [<ffffffff81011cfe>] ? xen_do_hypervisor_callback+0x1e/0x30
[488054.896313]<EOI>   [<ffffffff810093aa>] ? hypercall_page+0x3aa/0x1001
[488054.896330]  [<ffffffff810093aa>] ? hypercall_page+0x3aa/0x1001
[488054.896343]  [<ffffffff8100e160>] ? xen_vcpuop_set_next_event+0x0/0x60
[488054.896354]  [<ffffffff8100dcb3>] ? xen_safe_halt+0xc/0x15
[488054.896366]  [<ffffffff8100be63>] ? xen_idle+0x37/0x40
[488054.896377]  [<ffffffff8100feb1>] ? cpu_idle+0xa2/0xda
[488054.896388]  [<ffffffff814f5cdd>] ? start_kernel+0x3dc/0x3e8
[488054.896397] Mem-Info:
[488054.896404] Node 0 DMA per-cpu:
[488054.896413] CPU    0: hi:    0, btch:   1 usd:   0
[488054.896422] CPU    1: hi:    0, btch:   1 usd:   0
[488054.896431] CPU    2: hi:    0, btch:   1 usd:   0
[488054.896439] CPU    3: hi:    0, btch:   1 usd:   0
[488054.896447] Node 0 DMA32 per-cpu:
[488054.896456] CPU    0: hi:  186, btch:  31 usd: 185
[488054.896465] CPU    1: hi:  186, btch:  31 usd: 172
[488054.896473] CPU    2: hi:  186, btch:  31 usd: 166
[488054.896482] CPU    3: hi:  186, btch:  31 usd: 148
[488054.896494] active_anon:89704 inactive_anon:31826 isolated_anon:0
[488054.896497]  active_file:27715 inactive_file:277856 isolated_file:0
[488054.896499]  unevictable:0 dirty:30010 writeback:0 unstable:0
[488054.896501]  free:2220 slab_reclaimable:11510 slab_unreclaimable:1813
[488054.896503]  mapped:2981 shmem:102 pagetables:1274 bounce:0
[488054.896532] Node 0 DMA free:6996kB min:36kB low:44kB high:52kB 
active_anon:0kB inactive_anon:152kB active_file:56kB inactive_file:5148kB 
unevictable:0kB isolated(anon):0kB isolated(file):0kB present:12304kB 
mlocked:0kB dirty:0kB writeback:0kB mapped:32kB shmem:0kB 
slab_reclaimable:100kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB 
unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
[488054.896575] lowmem_reserve[]: 0 1751 1751 1751
[488054.896593] Node 0 DMA32 free:1884kB min:5332kB low:6664kB high:7996kB 
active_anon:358816kB inactive_anon:127152kB active_file:110804kB 
inactive_file:1106276kB unevictable:0kB isolated(anon):0kB isolated(file):0kB 
present:1793760kB mlocked:0kB dirty:120040kB writeback:0kB mapped:11892kB 
shmem:408kB slab_reclaimable:45940kB slab_unreclaimable:7252kB 
kernel_stack:1176kB pagetables:5096kB unstable:0kB bounce:0kB writeback_tmp:0kB 
pages_scanned:0 all_unreclaimable? no
[488054.896639] lowmem_reserve[]: 0 0 0 0
[488054.896656] Node 0 DMA: 3*4kB 1*8kB 2*16kB 1*32kB 4*64kB 2*128kB 3*256kB 
3*512kB 2*1024kB 1*2048kB 0*4096kB = 6996kB
[488054.896695] Node 0 DMA32: 19*4kB 2*8kB 0*16kB 0*32kB 0*64kB 4*128kB 1*256kB 
0*512kB 1*1024kB 0*2048kB 0*4096kB = 1884kB
[488054.896733] 306011 total pagecache pages
[488054.896741] 338 pages in swap cache
[488054.896748] Swap cache stats: add 1457, delete 1119, find 106250/106300
[488054.896757] Free swap  = 1044524kB
[488054.896764] Total swap = 1048568kB
[488054.900014] 458752 pages RAM
[488054.900014] 9907 pages reserved
[488054.900014] 303804 pages shared
[488054.900014] 151251 pages non-shared
[488054.900014] SLUB: Unable to allocate memory on node -1 (gfp=0x20)
[488054.900014]   cache: kmalloc-256, object size: 256, buffer size: 256, 
default order: 0, min order: 0
[488054.900014]   node 0: slabs: 62, objs: 992, free: 0
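As mentioned above, here is a minimal sketch of how the failing process ("comm") names can be tallied from the kernel log. It assumes the messages land in /var/log/kern.log; the path and surrounding log format may differ on other distributions.

#!/usr/bin/env python3
# Tally which comm names the page allocation failures are reported against.
# Assumes /var/log/kern.log; adjust the path for your distribution.
import re
from collections import Counter

FAILURE_RE = re.compile(r"(\S+): page allocation failure\. order:\d+, mode:0x[0-9a-f]+")

counts = Counter()
with open("/var/log/kern.log") as log:
    for line in log:
        match = FAILURE_RE.search(line)
        if match:
            counts[match.group(1)] += 1  # group 1 is the comm name (swapper, rsyslog, ...)

for comm, hits in counts.most_common():
    print("%6d  %s" % (hits, comm))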


Any guidance much appreciated.

Regards,
Steve.

--
May the ping be with you ..


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

