Re: [Xen-ia64-devel] PATCH: xencomm [2]
Hi Aron,

I don't think this issue is related to xencomm or copy_from/to_user.
It is caused by the page allocation done by the netbk/blkbk modules:
an order-8 allocation is too big. After some processes have been forked
and terminated, dom0's memory is fragmented and an order-8 allocation
almost always fails. Built-in modules don't hit this problem because no
processes have been created yet when they are initialized.

Possible solutions are:
1. fix the memory allocation in blkbk/netbk
2. build blkbk/netbk into the kernel
3. allocate more memory to dom0

See the following code:

static int __init blkif_init(void)
{
	struct page *page;
	int i;

	if (!is_running_on_xen())
		return -ENODEV;

	mmap_pages = blkif_reqs * BLKIF_MAX_SEGMENTS_PER_REQUEST;   <<<< too big

	page = balloon_alloc_empty_page_range(mmap_pages);          <<<<< alloc_page()
	if (page == NULL)
		return -ENOMEM;

	mmap_vstart = (unsigned long)pfn_to_kaddr(page_to_pfn(page));
	...
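The whole mmap_pages request is handed to __get_free_pages() as one
physically contiguous block (see balloon_alloc_empty_page_range ->
__get_free_pages in the trace below). The "order:8" in Aron's log means
2^8 = 256 contiguous pages, i.e. 4MB with the 16kB pages shown in the
meminfo dump, and a fragmented buddy allocator rarely has a free block
that large.

For illustration only, option 1 might look roughly like the sketch
below: reserve the pages one at a time and keep a lookup table, so only
order-0 allocations are needed. reserve_mmap_pages() and mmap_vaddrs
are made-up names, it assumes balloon_alloc_empty_page_range() can be
called once per page, and the error path is simplified. The real
blkbk/netbk code also assumes one virtually contiguous window
(mmap_vstart + index * PAGE_SIZE), so every user of that window would
have to be converted as well.

static unsigned long *mmap_vaddrs;	/* one kernel address per reserved page */

static int __init reserve_mmap_pages(unsigned long nr_pages)
{
	unsigned long i;
	struct page *page;

	mmap_vaddrs = kmalloc(nr_pages * sizeof(*mmap_vaddrs), GFP_KERNEL);
	if (mmap_vaddrs == NULL)
		return -ENOMEM;

	for (i = 0; i < nr_pages; i++) {
		/* order-0 request: no large contiguous block needed */
		page = balloon_alloc_empty_page_range(1);
		if (page == NULL) {
			/* real code would also return the already-reserved
			 * pages to the balloon; omitted in this sketch */
			kfree(mmap_vaddrs);
			return -ENOMEM;
		}
		mmap_vaddrs[i] = (unsigned long)pfn_to_kaddr(page_to_pfn(page));
	}
	return 0;
}

Callers would then use mmap_vaddrs[idx] instead of
mmap_vstart + idx * PAGE_SIZE.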
Best Regards,
Akio Takebe

>Hi Tristan,
>
>Tristan Gingold wrote: [Thu Sep 28 2006, 07:33:28AM EDT]
>> My tests were booting dom0+2*dom_U (using netbk, blkbk and netloop as
>> modules), and doing ping+wget from domU_1.
>
>I tested these patches on the Fedora rawhide kernel, but I still see
>the memory allocation failure when trying to load the netbk or blkbk
>modules in dom0.
>
>I thought that these patches would solve this problem, but the netbk
>module isn't touched. Could you comment?
>
>Thanks,
>Aron
>
>modprobe: page allocation failure. order:8, mode:0xd0
>
>Call Trace:
> [<a00000010001c8e0>] show_stack+0x40/0xa0
>                              sp=e00000001fdffc20 bsp=e00000001fdf9240
> [<a00000010001c970>] dump_stack+0x30/0x60
>                              sp=e00000001fdffdf0 bsp=e00000001fdf9228
> [<a00000010010b220>] __alloc_pages+0x500/0x540
>                              sp=e00000001fdffdf0 bsp=e00000001fdf91b0
> [<a00000010010b310>] __get_free_pages+0xb0/0x140
>                              sp=e00000001fdffe00 bsp=e00000001fdf9188
> [<a0000001003c7400>] balloon_alloc_empty_page_range+0x80/0x400
>                              sp=e00000001fdffe00 bsp=e00000001fdf9140
> [<a0000002000cc190>] netback_init+0x190/0x5c0 [netbk]
>                              sp=e00000001fdffe30 bsp=e00000001fdf9108
> [<a0000001000d1dc0>] sys_init_module+0x180/0x400
>                              sp=e00000001fdffe30 bsp=e00000001fdf9098
> [<a000000100014100>] ia64_ret_from_syscall+0x0/0x40
>                              sp=e00000001fdffe30 bsp=e00000001fdf9098
> [<a000000000010620>] __start_ivt_text+0xffffffff00010620/0x400
>                              sp=e00000001fe00000 bsp=e00000001fdf9098
>Mem-info:
>DMA per-cpu:
>cpu 0 hot: high 42, batch 7 used:14
>cpu 0 cold: high 14, batch 3 used:13
>DMA32 per-cpu: empty
>Normal per-cpu: empty
>HighMem per-cpu: empty
>Free pages: 22416kB (0kB HighMem)
>Active:5579 inactive:9841 dirty:269 writeback:0 unstable:0 free:1401 slab:1139 mapped:1143 pagetables:151
>DMA free:22416kB min:2896kB low:3616kB high:4336kB active:89264kB inactive:157456kB present:524288kB pages_scanned:0 all_unreclaimable? no
>lowmem_reserve[]: 0 0 0 0
>DMA32 free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
>lowmem_reserve[]: 0 0 0 0
>Normal free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
>lowmem_reserve[]: 0 0 0 0
>HighMem free:0kB min:512kB low:512kB high:512kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
>lowmem_reserve[]: 0 0 0 0
>DMA: 169*16kB 96*32kB 50*64kB 21*128kB 8*256kB 5*512kB 4*1024kB 1*2048kB 0*4096kB 0*8192kB 0*16384kB = 22416kB
>DMA32: empty
>Normal: empty
>HighMem: empty
>Swap cache: add 0, delete 0, find 0/0, race 0+0
>Free swap = 2031584kB
>Total swap = 2031584kB
>Free swap: 2031584kB
>32768 pages of RAM
>13277 reserved pages
>5037 pages shared
>0 pages swap cached
>13 pages in page table cache

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel