Re: [Xen-ia64-devel] [PATCH] Use saner dom0 memory and vcpu defaults, don't panic on over-allocation
Jarod Wilson wrote:
> Jarod Wilson wrote:
>> Isaku Yamahata wrote:
>>> On Tue, Jul 31, 2007 at 10:43:44PM -0600, Alex Williamson wrote:
>>>>> +    /* maximum available memory for dom0 */
>>>>> +    max_dom0_pages = avail_domheap_pages() -
>>>>> +            min(avail_domheap_pages() / 16UL,
>>>>> +                512UL << (20 - PAGE_SHIFT));
>>>> I assume this heuristic came from Akio's patch in the thread you
>>>> referenced; can anyone explain how this was derived and why it's
>>>> necessary?  It looks like a fairly random fudge factor.  Thanks,
>>> I guess it comes from compute_dom0_nr_pages() under arch/x86.
>>> However, I don't know why compute_dom0_nr_pages() does it that way.
>>> In any case, it should be different for ia64.  While I'm guessing the
>>> most dominant factor is the p2m table, the domain0 building process
>>> should be revised for a correct estimation.
>> The version above does seem to work well for me on all the boxes I've
>> tested it on, but I'm definitely all ears for how to obtain a better
>> calculation.  I'm not familiar enough with the memory layout to easily
>> come up with it myself, so if anyone else has a suggestion, please do
>> speak up.
>
> Still reading over the code, but throwing this idea out there...  Would
> it make sense to use efi_memmap_walk() to determine max_dom0_size?  And
> if so, should the size of the xenheap be subtracted from that?

Rather than that approach, a simple 'max_dom0_pages = avail_domheap_pages()'
is working just fine on both my 4G and 16G boxes, with the 4G box now getting
~260MB more memory for dom0 and the 16G box getting ~512MB more.  Are there
potential pitfalls here?  I had a brief explanation for why the fudge factor
was needed on x86 that now escapes me, but I think ia64 may be good to go
without it...

--

(XEN) System RAM: 4069MB (4166832kB)
(XEN) size of virtual frame_table: 10256kB
(XEN) virtual machine to physical table: f3fffffff7e00070 size: 2096kB
(XEN) max_page: 0x103fff2
(XEN) allocating frame table/mpt table at mfn 0.
(XEN) Xen heap: 60MB (61664kB)
(XEN) Domain heap initialised: DMA width 32 bits
(XEN) avail:0x1180c60000000000, status:0x60000000000, control:0x1180c00000000000, vm?0x0
(XEN) No VT feature supported.
(XEN) cpu_init: current=f000000004118000
(XEN) vhpt_init: vhpt paddr=0x40febc0000, end=0x40febcffff
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Time init:
(XEN) .... System Time: 1503261ns
(XEN) .... scale: 11C71C71C
(XEN) num_online_cpus=1, max_cpus=64
(XEN) Brought up 1 CPUs
(XEN) xenoprof: using perfmon.
(XEN) perfmon: version 2.0 IRQ 238
(XEN) perfmon: Itanium 2 PMU detected, 16 PMCs, 18 PMDs, 4 counters (47 bits)
(XEN) Maximum number of domains: 63; 18 RID bits per domain
(XEN) *** LOADING DOMAIN 0 ***
(XEN) Max dom0 size: 3978MB
(XEN) Reducing dom0 memory allocation from 4294967296 to 4172185600 to fit available memory

--

Note that we've got a reported total of 4069MB up there, and a max dom0 size
of 3978MB, so perhaps there's some further tweaking that could be done, but I
think this looks quite reasonable.

--
Jarod Wilson
jwilson@xxxxxxxxxx
_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel