Re: [PATCH v3 07/14 RESEND] cpufreq: Export HWP parameters to userspace
On 10.05.2023 19:49, Jason Andryuk wrote:
> On Mon, May 8, 2023 at 6:26 AM Jan Beulich <jbeulich@xxxxxxxx> wrote:
>>
>> On 01.05.2023 21:30, Jason Andryuk wrote:
>>> Extend xen_get_cpufreq_para to return hwp parameters. These match the
>>> hardware rather closely.
>>>
>>> We need the features bitmask to indicate fields supported by the actual
>>> hardware.
>>>
>>> The use of uint8_t parameters matches the hardware size. uint32_t
>>> entries grows the sysctl_t past the build assertion in setup.c. The
>>> uint8_t ranges are supported across multiple generations, so hopefully
>>> they won't change.
>>
>> Still it feels a little odd for values to be this narrow. Aiui the
>> scaling_governor[] and scaling_{max,min}_freq fields aren't (really)
>> used by HWP. So you could widen the union in struct
>> xen_get_cpufreq_para (in a binary but not necessarily source compatible
>> manner), gaining you 6 more uint32_t slots. Possibly the somewhat oddly
>> placed scaling_cur_freq could be included as well ...
>
> The values are narrow, but they match the hardware. It works for HWP,
> so there is no need to change at this time AFAICT.
>
> Do you want me to make this change?

Well, much depends on what these 8-bit values actually express (I did
raise this question in one of the replies to your patches, as I wasn't
able to find anything in the SDM). That'll then hopefully allow making
some educated prediction on how likely it is that a future variant of
hwp would want to widen them. (Was it energy_perf that went from 4 to 8
bits at some point, which you even comment upon in the public header?)

> On Mon, May 8, 2023 at 6:47 AM Jan Beulich <jbeulich@xxxxxxxx> wrote:
>> On 08.05.2023 12:25, Jan Beulich wrote:
>>> On 01.05.2023 21:30, Jason Andryuk wrote:
>>>> Extend xen_get_cpufreq_para to return hwp parameters. These match the
>>>> hardware rather closely.
>>>>
>>>> We need the features bitmask to indicate fields supported by the actual
>>>> hardware.
>>>>
>>>> The use of uint8_t parameters matches the hardware size. uint32_t
>>>> entries grows the sysctl_t past the build assertion in setup.c. The
>>>> uint8_t ranges are supported across multiple generations, so hopefully
>>>> they won't change.
>>>
>>> Still it feels a little odd for values to be this narrow. Aiui the
>>> scaling_governor[] and scaling_{max,min}_freq fields aren't (really)
>>> used by HWP. So you could widen the union in struct
>>> xen_get_cpufreq_para (in a binary but not necessarily source compatible
>>> manner), gaining you 6 more uint32_t slots. Possibly the somewhat oddly
>>> placed scaling_cur_freq could be included as well ...
>>
>> Having seen patch 9 now as well, I wonder whether here (or in a separate
>> patch) you don't want to limit providing inapplicable data (for example
>> not filling *scaling_available_governors would even avoid an allocation,
>> thus removing a possible reason for failure), while there (or again in a
>> separate patch) you'd also limit what the tool reports (inapplicable
>> output causes confusion / questions at best).
>
> The xenpm output only shows relevant information:
>
> # xenpm get-cpufreq-para 11
> cpu id              : 11
> affected_cpus       : 11
> cpuinfo frequency   : base [1600000] max [4900000]
> scaling_driver      : hwp-cpufreq
> scaling_avail_gov   : hwp
> current_governor    : hwp
> hwp variables       :
>   hardware limits   : lowest [1] most_efficient [11]
>                     : guaranteed [11] highest [49]
>   configured limits : min [1] max [255] energy_perf [128]
>                     : activity_window [0 hardware selected]
>                     : desired [0 hw autonomous]
> turbo mode          : enabled
>
> The scaling_*_freq values, policy->{min,max,cur} are filled in with
> base, max and 0 in hwp_get_cpu_speeds(), so it's not totally invalid
> values being returned. The governor registration restricting to only
> internal governors when HWP is active means it only has the single
> governor. I think it's okay as-is, but let me know if you want
> something changed.
Well, the main connection here was to the possible overloading of space
in the sysctl struct, by widening what the union covers. That can of
course only be done for fields which don't convey useful data. If we go
without the further overloading, I guess we can for now leave the tool
as you have it, and deal with possible tidying later on.

Jan