
[Xen-devel] [PATCH 4/5] xen: Try hard to get max_cpus value



This patch improves the max_cpus calculation. First
xen_hyper_get_cpu_info() tries to take the value from the
nr_cpu_ids variable. If nr_cpu_ids does not exist, it attempts
to read the max_cpus variable instead; because max_cpus may live
in the hypervisor's init section (between __init_begin and
__init_end) and therefore hold stale data, in that case the value
is estimated from the size of cpumask_t. When max_cpus is
readable, the result is still raised to at least the number of
bits in cpumask_t.

Signed-off-by: Daniel Kiper <daniel.kiper@xxxxxxxxxx>
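
For illustration, here is a minimal standalone sketch of the same
fallback order. The symbol lookups, addresses and sizes below are
hypothetical stand-ins for crash's symbol_exists()/get_symbol_data()
helpers and for the Xen symbols, not the actual crash code:

#include <stdio.h>

#define CPUMASK_T_SIZE 16	/* assumed sizeof(cpumask_t) in the dump */

/* Hypothetical stand-ins for crash's symbol table lookups. */
static int have_nr_cpu_ids = 0;		/* does the dump export nr_cpu_ids? */
static unsigned int nr_cpu_ids_val = 8;
static unsigned long max_cpus_addr = 0x2000;	/* address of max_cpus */
static unsigned int max_cpus_val = 4;
static unsigned long init_begin = 0x1000, init_end = 0x1800;

static unsigned int get_max_cpus(void)
{
	unsigned int max_cpus;

	if (have_nr_cpu_ids)
		/* Preferred source: nr_cpu_ids. */
		return nr_cpu_ids_val;

	if (max_cpus_addr >= init_begin && max_cpus_addr < init_end)
		/* max_cpus sits in the init section, so its contents
		 * cannot be trusted; estimate from cpumask_t instead. */
		return CPUMASK_T_SIZE * 8;

	max_cpus = max_cpus_val;
	/* Never report fewer CPUs than the cpumask can describe. */
	if (CPUMASK_T_SIZE * 8 > max_cpus)
		max_cpus = CPUMASK_T_SIZE * 8;
	return max_cpus;
}

int main(void)
{
	printf("max_cpus = %u\n", get_max_cpus());
	return 0;
}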

diff -Npru crash-6.1.0.orig/xen_hyper.c crash-6.1.0/xen_hyper.c
--- crash-6.1.0.orig/xen_hyper.c        2012-11-13 15:02:34.000000000 +0100
+++ crash-6.1.0/xen_hyper.c     2012-11-13 15:07:40.000000000 +0100
@@ -1883,13 +1883,28 @@ xen_hyper_get_uptime_hyper(void)
 void
 xen_hyper_get_cpu_info(void)
 {
-       ulong addr;
+       ulong addr, init_begin, init_end;
        ulong *cpumask;
        uint *cpu_idx;
        int i, j, cpus;
 
        XEN_HYPER_STRUCT_SIZE_INIT(cpumask_t, "cpumask_t");
-       xht->max_cpus = XEN_HYPER_SIZE(cpumask_t) * 8;
+
+       if (symbol_exists("nr_cpu_ids"))
+               get_symbol_data("nr_cpu_ids", sizeof(uint), &xht->max_cpus);
+       else {
+               init_begin = symbol_value("__init_begin");
+               init_end = symbol_value("__init_end");
+               addr = symbol_value("max_cpus");
+
+               if (addr >= init_begin && addr < init_end)
+                       xht->max_cpus = XEN_HYPER_SIZE(cpumask_t) * 8;
+               else {
+                       get_symbol_data("max_cpus", sizeof(xht->max_cpus), &xht->max_cpus);
+                       if (XEN_HYPER_SIZE(cpumask_t) * 8 > xht->max_cpus)
+                               xht->max_cpus = XEN_HYPER_SIZE(cpumask_t) * 8;
+               }
+       }
 
        if (xht->cpumask) {
                free(xht->cpumask);

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
