
Re: [Xen-devel] [PATCH v3 01/19] xen: dump vNUMA information with debug key "u"



On 13/01/15 20:51, Wei Liu wrote:
>
>>> +
>>> +        vnuma = d->vnuma;
>>> +        printk("     %u vnodes, %u vcpus:\n", vnuma->nr_vnodes, d->max_vcpus);
>>> +        for ( i = 0; i < vnuma->nr_vnodes; i++ )
>>> +        {
>>> +            err = snprintf(keyhandler_scratch, 12, "%3u",
>>> +                    vnuma->vnode_to_pnode[i]);
>>> +            if ( err < 0 || vnuma->vnode_to_pnode[i] == NUMA_NO_NODE )
>>> +                strlcpy(keyhandler_scratch, "???", 4);
>>> +
>>> +            printk("       vnode  %3u - pnode %s\n", i, keyhandler_scratch);
>>> +            for ( j = 0; j < vnuma->nr_vmemranges; j++ )
>>> +            {
>>> +                if ( vnuma->vmemrange[j].nid == i )
>>> +                {
>>> +                    printk("         %016"PRIx64" - %016"PRIx64"\n",
>>> +                           vnuma->vmemrange[j].start,
>>> +                           vnuma->vmemrange[j].end);
>>> +                }
>>> +            }
>>> +
>>> +            printk("       vcpus: ");
>>> +            for ( j = 0, n = 0; j < d->max_vcpus; j++ )
>>> +            {
>>> +                if ( !(j & 0x3f) )
>>> +                    process_pending_softirqs();
>>> +
>>> +                if ( vnuma->vcpu_to_vnode[j] == i )
>>> +                {
>>> +                    if ( (n + 1) % 8 == 0 )
>>> +                        printk("%3d\n", j);
>>> +                    else if ( !(n % 8) && n != 0 )
>>> +                        printk("%17d ", j);
>>> +                    else
>>> +                        printk("%3d ", j);
>>> +                    n++;
>>> +                }
>> Do you have a sample of what this looks like for a VM with more than 8
>> vcpus?
>>
> (XEN) Memory location of each domain:
> (XEN) Domain 0 (total: 131072):
> (XEN)     Node 0: 131072
> (XEN) Domain 6 (total: 768064):
> (XEN)     Node 0: 768064
> (XEN)      2 vnodes, 20 vcpus:
> (XEN)        vnode    0 - pnode   0
> (XEN)          0000000000000000 - 000000005dc00000
> (XEN)        vcpus:   0   1   2   3   4   5   6   7
> (XEN)                 8   9
> (XEN)        vnode    1 - pnode   0
> (XEN)          000000005dc00000 - 00000000bb000000
> (XEN)          0000000100000000 - 0000000100800000
> (XEN)        vcpus:  10  11  12  13  14  15  16  17
> (XEN)                18  19

This is very verbose, and (IMO) moderately difficult to parse even
knowing what the information is.

May I suggest the following stylistic changes:

2 vnodes, 20 vcpus:
  0: pnode 0, vcpus {0-9}
    0000000000000000 - 000000005dc00000
  1: pnode 0, vcpus {10-19}
    000000005dc00000 - 00000000bb000000
    0000000100000000 - 0000000100800000
 
You have already stated 2 vnodes, so "vnode $X" is redundant as the list
index.  The vcpus are exceedingly likely to be allocated consecutively,
and cpulist_scnprintf() (the range-list variant of cpumask_scnprintf())
is a very concise way of representing them (and will reduce your code
quite a bit).
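
An untested sketch of what the per-vnode body could look like, reusing
i, j, d and vnuma from the hunk above.  It borrows a cpumask_t to hold
vcpu IDs rather than pcpus (so it implicitly assumes d->max_vcpus <=
NR_CPUS), and the scratch buffer size is arbitrary:

    cpumask_t vcpus;   /* holds vcpu IDs here, not pcpus */
    char buf[64];      /* arbitrary scratch size, for illustration */

    /* Collect this vnode's vcpus into a bitmap... */
    cpumask_clear(&vcpus);
    for ( j = 0; j < d->max_vcpus; j++ )
        if ( vnuma->vcpu_to_vnode[j] == i )
            cpumask_set_cpu(j, &vcpus);

    /* ...and print them as a compact range list, e.g. "0-9". */
    cpulist_scnprintf(buf, sizeof(buf), &vcpus);
    printk("  %3u: pnode %3u, vcpus {%s}\n",
           i, vnuma->vnode_to_pnode[i], buf);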

I might even be tempted to suggest adding something like "guest physical
layout" to the "2 vnodes, 20 vcpus" line, to make it explicitly clear
that the memory ranges are gfns and not mfns/something else.  After all,
this debugkey is mixing host and guest memory attributes.
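
Concretely, the header printk might become something like (exact wording
illustrative):

    printk("     %u vnodes, %u vcpus, guest physical layout:\n",
           vnuma->nr_vnodes, d->max_vcpus);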

~Andrew





 

