Re: [XEN PATCH v2 2/2] x86/cpufreq: separate powernow/hwp/acpi cpufreq code
On 01.07.2024 14:19, Sergiy Kibrik wrote:
> --- a/xen/drivers/acpi/pmstat.c
> +++ b/xen/drivers/acpi/pmstat.c
> @@ -255,7 +255,7 @@ static int get_cpufreq_para(struct xen_sysctl_pm_op *op)
>          strlcpy(op->u.get_para.scaling_driver, "Unknown", CPUFREQ_NAME_LEN);
>
>      if ( !strncmp(op->u.get_para.scaling_driver, XEN_HWP_DRIVER_NAME,
> -                  CPUFREQ_NAME_LEN) )
> +                  CPUFREQ_NAME_LEN) && IS_ENABLED(CONFIG_INTEL) )
Wrapping like this is confusing, not just because of the flawed indentation.
Please can this be
    if ( !strncmp(op->u.get_para.scaling_driver, XEN_HWP_DRIVER_NAME,
                  CPUFREQ_NAME_LEN) &&
         IS_ENABLED(CONFIG_INTEL) )
? Perhaps the IS_ENABLED() would also be better placed first (not just here).
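For illustration, with the compile-time check leading, the condition would then
read roughly

    if ( IS_ENABLED(CONFIG_INTEL) &&
         !strncmp(op->u.get_para.scaling_driver, XEN_HWP_DRIVER_NAME,
                  CPUFREQ_NAME_LEN) )

(a sketch of the suggested ordering only), which also keeps the cheap
IS_ENABLED() test ahead of the string comparison.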
> --- a/xen/drivers/cpufreq/utility.c
> +++ b/xen/drivers/cpufreq/utility.c
> @@ -379,7 +379,7 @@ int cpufreq_driver_getavg(unsigned int cpu, unsigned int flag)
>      if (!cpu_online(cpu) || !(policy = per_cpu(cpufreq_cpu_policy, cpu)))
>          return 0;
>
> -    freq_avg = get_measured_perf(cpu, flag);
> +    freq_avg = IS_ENABLED(CONFIG_INTEL) ? get_measured_perf(cpu, flag) : 0;
>      if ( freq_avg > 0 )
>          return freq_avg;
Why is this? APERF/MPERF aren't Intel-only MSRs.
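They're architectural MSRs that AMD implements as well, and the measured-perf
path boils down to something like the sketch below (simplified, not the exact
code; the prev_* samples are placeholders for the previously recorded values):

    /* Sketch only - APERF/MPERF exist on Intel and AMD alike. */
    uint64_t aperf, mperf;

    rdmsrl(MSR_IA32_APERF, aperf);  /* counts at the actual delivered frequency */
    rdmsrl(MSR_IA32_MPERF, mperf);  /* counts at a constant reference frequency */

    /* Average frequency over the sampling interval: */
    freq_avg = policy->cpuinfo.max_freq * (aperf - prev_aperf) /
               (mperf - prev_mperf);

In a CONFIG_INTEL=n (e.g. AMD-only) build the gating would hence needlessly
drop the measured average there as well.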
> --- a/xen/include/acpi/cpufreq/cpufreq.h
> +++ b/xen/include/acpi/cpufreq/cpufreq.h
> @@ -254,11 +254,20 @@ void intel_feature_detect(struct cpufreq_policy *policy);
>
>  int hwp_cmdline_parse(const char *s, const char *e);
>  int hwp_register_driver(void);
> +#ifdef CONFIG_INTEL
>  bool hwp_active(void);
> +#else
> +static inline bool hwp_active(void)
> +{
> +    return false;
> +}
> +#endif
> +
>  int get_hwp_para(unsigned int cpu,
>                   struct xen_cppc_para *cppc_para);
>  int set_hwp_para(struct cpufreq_policy *policy,
>                   struct xen_set_cppc_para *set_cppc);
>
>  int acpi_register_driver(void);
> +
>  #endif /* __XEN_CPUFREQ_PM_H__ */
Nit: The addition of this blank line should be part of the earlier patch.
Jan