
Re: [Xen-devel] [PATCH RFC v2 1/2] linux/vnuma: vNUMA for PV domu guest



On Tue, 2013-09-17 at 04:34 -0400, Elena Ufimtseva wrote:
> Requests NUMA topology info from Xen by issuing a subop
> hypercall.
>
Not such a big deal, but I think you can call it just a hypercall, and
perhaps mention the name (or something like "XENMEM_get_vnuma_info" via
"HYPERVISOR_memory_op()"). What I'm after is that 'subop hcall' may be
too much of a Xen-ism.
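
I.e., just a sketch of the wording I mean, with numa_topo being the
vnuma_topology_info instance from the patch below:

     int rc = HYPERVISOR_memory_op(XENMEM_get_vnuma_info, &numa_topo);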

> Initializes NUMA nodes, sets the number of CPUs, the
> distance table and the NUMA nodes' memory ranges during boot.
> The vNUMA topology is defined by the user in the VM config file.
>
Here (I mean in the changelog of this patch) I'd just say that the
vNUMA topology is hosted in Xen. You can add a quick note about where
it comes from but, (1), it pretty much always comes from the config
file right now, although that may change in the future (when we
introduce Dom0 vNUMA); (2), I don't think Linux cares much where Xen
got the information. :-P

> Memory
> ranges are represented by the structure vnuma_topology_info,
> where the start and end of each memory area are defined in
> guest pfns, constructed and aligned according to the
> e820 domain map.
>
Fine, and what does that mean for Linux? What does the fact that
everything is already aligned and respectful of the e820 imply, I
mean? I actually have a question about this, which I ask below, but
whatever the answer is, I think a summary of it should go up here.

Oh, and BTW, as it is in the code, some blank lines separating the
various paragraphs a bit more would be nice in the changelogs too.

> Signed-off-by: Elena Ufimtseva <ufimtseva@xxxxxxxxx>
> ---
>
> diff --git a/arch/x86/include/asm/xen/vnuma.h b/arch/x86/include/asm/xen/vnuma.h
>
> diff --git a/arch/x86/xen/vnuma.c b/arch/x86/xen/vnuma.c
> new file mode 100644
> index 0000000..3c6c73f
> --- /dev/null
> +++ b/arch/x86/xen/vnuma.c
> @@ -0,0 +1,92 @@
> +#include <linux/err.h>
> +#include <linux/memblock.h>
> +#include <xen/interface/xen.h>
> +#include <xen/interface/memory.h>
> +#include <asm/xen/interface.h>
> +#include <asm/xen/hypercall.h>
> +#include <asm/xen/vnuma.h>
> +#ifdef CONFIG_NUMA
> +/* Xen PV NUMA topology initialization */
> +static unsigned int xen_vnuma_init = 0;
> +int xen_vnuma_support()
> +{
> +     return xen_vnuma_init;
> +}
> +int __init xen_numa_init(void)
> +{
[snip]
> +     /*
> +      * NUMA nodes memory ranges are in pfns, constructed and
> +      * aligned based on e820 ram domain map
> +     */
> +     for (i = 0; i < numa_topo.nr_nodes; i++) {
> +             if (numa_add_memblk(i, varea[i].start, varea[i].end))
> +                     /* pass to numa_dummy_init */
> +                     goto vnumaout;
> +             node_set(i, numa_nodes_parsed);
> +     }
>
So, forgive me if my e820-fu is not black-belt level: here you go
through nr_nodes elements of the varea[] array and set up nr_nodes
memblks, i.e., one for each node.

I take this to mean that there can only be one memblk per node,
which, for instance, means no holes. Is that accurate?

If yes, the "constructed and aligned based on e820" means basically just
alignment, right?

More important, are we sure this is always ok, i.e., that we don't and
won't ever need to take holes into account?
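
If we do (or ever will), I guess the interface would need to allow
for more than one memblk per node, and the loop above would become
something like this (nr_memblks and vblk[] are purely hypothetical
names, just to show what I mean):

     for (i = 0; i < numa_topo.nr_memblks; i++) {
             if (numa_add_memblk(vblk[i].nid, vblk[i].start, vblk[i].end))
                     /* pass to numa_dummy_init */
                     goto vnumaout;
             node_set(vblk[i].nid, numa_nodes_parsed);
     }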

> diff --git a/include/xen/interface/memory.h b/include/xen/interface/memory.h
> index 2ecfe4f..4237f51 100644
> --- a/include/xen/interface/memory.h
> +++ b/include/xen/interface/memory.h
> @@ -263,4 +263,31 @@ struct xen_remove_from_physmap {
>
> +struct vnuma_topology_info {
> +     /* OUT */
> +     domid_t domid;
> +     /* IN */
> +     uint16_t nr_nodes; /* number of virtual numa nodes */
> +     uint32_t _pad;
> +     /* distance table */
> +     GUEST_HANDLE(uint) vdistance;
> +     /* cpu mapping to vnodes */
> +     GUEST_HANDLE(uint) cpu_to_node;
> +     /*
> +     * array of numa memory areas constructed by Xen
> +     * where start and end are pfn numbers of the area
> +     * Xen takes into account domains e820 map
                                   ^domain's ?
> +     */
> +     GUEST_HANDLE(vnuma_memarea) vmemarea;
> +};
> +DEFINE_GUEST_HANDLE_STRUCT(vnuma_topology_info);
> +
> +#define XENMEM_get_vnuma_info        25
> +
>  #endif /* __XEN_PUBLIC_MEMORY_H__ */
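
BTW, just to double check that I'm reading the interface right, I'd
expect the guest side to end up looking roughly like below. This is a
sketch only: buffer allocation and error handling are omitted, and I'm
assuming domid is meant to be DOMID_SELF on input:

     struct vnuma_topology_info numa_topo = {
             .domid = DOMID_SELF,
             .nr_nodes = nr_nodes,
     };
     set_xen_guest_handle(numa_topo.vdistance, vdistance);
     set_xen_guest_handle(numa_topo.cpu_to_node, cpu_to_node);
     set_xen_guest_handle(numa_topo.vmemarea, varea);
     rc = HYPERVISOR_memory_op(XENMEM_get_vnuma_info, &numa_topo);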

Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

