RE: [Xen-devel] [PATCH]Add free memory size of every NUMA node in phsical info
Thanks for your advice. As far as I know, xc_availheap returns a specific node's free memory, but in this context we need every node's info, so using xc_availheap would usually mean 4~8 hypercalls, one per node. In my method I allocate more memory so that a single hypercall can return a list instead of a single value; it is a trade-off between time complexity and space complexity. This info also has the same attributes as "cpu_to_node" in the physical info, so I think it is better to get them with the same method. Thanks again.

>-----Original Message-----
>From: Daniel P. Berrange [mailto:berrange@xxxxxxxxxx]
>Sent: Tuesday, February 26, 2008 11:02 AM
>To: Duan, Ronghui
>Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
>Subject: Re: [Xen-devel] [PATCH]Add free memory size of every NUMA node in
>phsical info
>
>On Tue, Feb 26, 2008 at 10:46:17AM +0800, Duan, Ronghui wrote:
>> I see that, the reason I don't use that function is there need one more
>> time hypercall, I only reuse the function which have been realized in
>> hypervisor. Thanks for your advice.
>
>The performance impact of doing 1 extra hypercall for 'availheap' is
>completely irrelevant in this context. The time for 1 extra hypercall
>is dwarfed (by several orders of magnitude) by the overhead due to
>'xm' and 'xend' being in python & using XML-RPC. We should just use
>the existing hypercall & not worry about time overhead we'll never
>notice.
>
>Dan.
>--
>|=- Red Hat, Engineering, Emerging Technologies, Boston. +1 978 392 2496 -=|
>|=- Perl modules: http://search.cpan.org/~danberr/ -=|
>|=- Projects: http://freshmeat.net/~danielpb/ -=|
>|=- GnuPG: 7D3B9505 F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 -=|

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
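The trade-off being debated (one hypercall per node, as with xc_availheap, versus a single hypercall that fills a per-node list, as the patch proposes) can be sketched abstractly. The names below are hypothetical stand-ins, not real libxc APIs; the stub only counts invocations, whereas the real code would trap into the hypervisor on each call:

```python
# Minimal model of the two query styles discussed in this thread.
# `Hypervisor` is a stub: each method call stands in for one hypercall.

class Hypervisor:
    def __init__(self, free_per_node):
        self.free_per_node = free_per_node  # free memory per NUMA node
        self.hypercalls = 0                 # how many "hypercalls" were made

    def availheap(self, node):
        # Per-node style: one hypercall returns one node's free memory.
        self.hypercalls += 1
        return self.free_per_node[node]

    def node_free_list(self):
        # Batched style: one hypercall returns a list for all nodes,
        # at the cost of a larger result buffer in the caller.
        self.hypercalls += 1
        return list(self.free_per_node)

hv = Hypervisor([512, 256, 1024, 768])  # e.g. MB free on 4 NUMA nodes

# Per-node style: one hypercall per node (4 here, 4~8 in the mail above).
per_node = [hv.availheap(n) for n in range(4)]
calls_per_node_style = hv.hypercalls

hv.hypercalls = 0
# Batched style: a single hypercall, larger result.
batched = hv.node_free_list()
calls_batched_style = hv.hypercalls
```

Both styles recover the same data; they differ only in call count versus buffer size, which is the time-versus-space point Ronghui makes. Dan's counter is that in a toolstack dominated by python/XML-RPC overhead, the extra calls are unmeasurable anyway.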