
Re: [Xen-devel] [PATCH V4 5/8] tools/libxl: Set 'reg' of cpu node equal to MPIDR affinity for domU



Hi Chen,

On 28/05/15 11:15, Chen Baozi wrote:
> From: Chen Baozi <baozich@xxxxxxxxx>
> 
> According to ARM CPUs bindings, the reg field should match the MPIDR's
> affinity bits. We will use AFF0 and AFF1 when constructing the reg value
> of the guest at the moment, for it is enough for the current max vcpu
> number.
> 
> Signed-off-by: Chen Baozi <baozich@xxxxxxxxx>
> ---
>  tools/libxl/libxl_arm.c | 14 ++++++++++++--
>  1 file changed, 12 insertions(+), 2 deletions(-)
> 
> diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
> index c5088c4..8aa4815 100644
> --- a/tools/libxl/libxl_arm.c
> +++ b/tools/libxl/libxl_arm.c
> @@ -272,6 +272,7 @@ static int make_cpus_node(libxl__gc *gc, void *fdt, int nr_cpus,
>                            const struct arch_info *ainfo)
>  {
>      int res, i;
> +    uint64_t mpidr_aff;
>  
>      res = fdt_begin_node(fdt, "cpus");
>      if (res) return res;
> @@ -283,7 +284,16 @@ static int make_cpus_node(libxl__gc *gc, void *fdt, int nr_cpus,
>      if (res) return res;
>  
>      for (i = 0; i < nr_cpus; i++) {
> -        const char *name = GCSPRINTF("cpu@%d", i);
> +        const char *name;
> +
> +        /*
> +         * According to ARM CPUs bindings, the reg field should match
> +         * the MPIDR's affinity bits. We will use AFF0 and AFF1 when
> +         * constructing the reg value of the guest at the moment, for it
> +         * is enough for the current max vcpu number.
> +         */
> +        mpidr_aff = (uint64_t)((i & 0x0f) | (((i >> 4) & 0xff) << 8));

The (uint64_t) cast is not necessary; the int-typed expression is implicitly converted on assignment to mpidr_aff.

> +        name = GCSPRINTF("cpu@%lx", mpidr_aff);

It's not necessary to change the cpu@ part of the node name here.

Regards,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

