Re: [Xen-devel] [PATCH v5 0/8] Display IO topology when PXM data is available (plus some cleanup)
>>> On 23.03.15 at 14:47, <boris.ostrovsky@xxxxxxxxxx> wrote:
> How about this (only x86 compile-tested). And perhaps, while at it, fix
> types for cpu_core_id and phys_proc_id.
>
> diff --git a/xen/common/sysctl.c b/xen/common/sysctl.c
> index c73dfc9..b319be7 100644
> --- a/xen/common/sysctl.c
> +++ b/xen/common/sysctl.c
> @@ -354,10 +354,10 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
>              if ( cpu_present(i) )
>              {
>                  cputopo.core = cpu_to_core(i);
> -                if ( cputopo.core == BAD_APICID )
> +                if ( cputopo.core == INVALID_COREID )
>                      cputopo.core = XEN_INVALID_CORE_ID;
>                  cputopo.socket = cpu_to_socket(i);
> -                if ( cputopo.socket == BAD_APICID )
> +                if ( cputopo.socket == INVALID_SOCKETID )
>                      cputopo.socket = XEN_INVALID_SOCKET_ID;
Why not use XEN_INVALID_CORE_ID / XEN_INVALID_SOCKET_ID
for those return values right away?
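Purely for illustration (not part of the posted patch, and assuming the public
XEN_INVALID_CORE_ID / XEN_INVALID_SOCKET_ID constants from public/sysctl.h are
visible where the helpers live), the helpers could hand back the public
constants directly, so the sysctl caller needs no further translation:

/*
 * Sketch only: helpers returning the public constants straight away,
 * assuming xen/include/public/sysctl.h is reachable from this header.
 */
static inline unsigned int cpu_to_core(unsigned int cpu)
{
    return cpu_data[cpu].cpu_core_id == BAD_APICID
           ? XEN_INVALID_CORE_ID : cpu_data[cpu].cpu_core_id;
}

static inline unsigned int cpu_to_socket(unsigned int cpu)
{
    return cpu_data[cpu].phys_proc_id == BAD_APICID
           ? XEN_INVALID_SOCKET_ID : cpu_data[cpu].phys_proc_id;
}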
> --- a/xen/include/asm-x86/processor.h
> +++ b/xen/include/asm-x86/processor.h
> @@ -214,8 +214,19 @@ extern void detect_extended_topology(struct cpuinfo_x86 *c);
>  
>  extern void detect_ht(struct cpuinfo_x86 *c);
>  
> -#define cpu_to_core(_cpu)   (cpu_data[_cpu].cpu_core_id)
> -#define cpu_to_socket(_cpu) (cpu_data[_cpu].phys_proc_id)
> +inline int cpu_to_core(unsigned cpu)
> +{
> +    if ( cpu_data[cpu].cpu_core_id == BAD_APICID )
> +        return INVALID_COREID;
> +    return cpu_data[cpu].cpu_core_id;
> +}
> +
> +inline int cpu_to_socket(unsigned cpu)
> +{
> +    if ( cpu_data[cpu].phys_proc_id == BAD_APICID )
> +        return INVALID_SOCKETID;
> +    return cpu_data[cpu].phys_proc_id;
> +}
Apart from them needing to be static, I don't think we want the
extra conditionals in x86 code. Hence I think you rather should
introduce wrappers for the specific use in sysctl.c.
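A rough sketch of such wrappers, for illustration only (the helper names here
are made up, and the existing cpu_to_core()/cpu_to_socket() macros stay
untouched):

/* Sketch only: local wrappers in xen/common/sysctl.c. */
static unsigned int sysctl_cpu_to_core(unsigned int cpu)
{
    unsigned int core = cpu_to_core(cpu);

    /* Map the arch's "no data" marker to the public constant. */
    return core == BAD_APICID ? XEN_INVALID_CORE_ID : core;
}

static unsigned int sysctl_cpu_to_socket(unsigned int cpu)
{
    unsigned int socket = cpu_to_socket(cpu);

    return socket == BAD_APICID ? XEN_INVALID_SOCKET_ID : socket;
}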
Jan
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel