[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

RE: [PATCH v7 13/13] xen/cpufreq: Adapt SET/GET_CPUFREQ_CPPC xen_sysctl_pm_op for amd-cppc driver


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: "Penny, Zheng" <penny.zheng@xxxxxxx>
  • Date: Thu, 28 Aug 2025 06:54:51 +0000
  • Accept-language: en-US
  • Cc: "Huang, Ray" <Ray.Huang@xxxxxxx>, Anthony PERARD <anthony.perard@xxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, "Orzel, Michal" <Michal.Orzel@xxxxxxx>, Julien Grall <julien@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Thu, 28 Aug 2025 06:55:13 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH v7 13/13] xen/cpufreq: Adapt SET/GET_CPUFREQ_CPPC xen_sysctl_pm_op for amd-cppc driver

[Public]

> -----Original Message-----
> From: Jan Beulich <jbeulich@xxxxxxxx>
> Sent: Thursday, August 28, 2025 2:38 PM
> To: Penny, Zheng <penny.zheng@xxxxxxx>
> Cc: Huang, Ray <Ray.Huang@xxxxxxx>; Anthony PERARD
> <anthony.perard@xxxxxxxxxx>; Andrew Cooper <andrew.cooper3@xxxxxxxxxx>;
> Orzel, Michal <Michal.Orzel@xxxxxxx>; Julien Grall <julien@xxxxxxx>; Roger Pau
> Monné <roger.pau@xxxxxxxxxx>; Stefano Stabellini <sstabellini@xxxxxxxxxx>;
> xen-devel@xxxxxxxxxxxxxxxxxxxx
> Subject: Re: [PATCH v7 13/13] xen/cpufreq: Adapt SET/GET_CPUFREQ_CPPC
> xen_sysctl_pm_op for amd-cppc driver
>
> On 28.08.2025 08:35, Jan Beulich wrote:
> > On 28.08.2025 06:06, Penny, Zheng wrote:
> >>> -----Original Message-----
> >>> From: Jan Beulich <jbeulich@xxxxxxxx>
> >>> Sent: Tuesday, August 26, 2025 12:03 AM
> >>>
> >>> On 22.08.2025 12:52, Penny Zheng wrote:
> >>>> --- a/xen/include/public/sysctl.h
> >>>> +++ b/xen/include/public/sysctl.h
> >>>> @@ -336,8 +336,14 @@ struct xen_ondemand {
> >>>>      uint32_t up_threshold;
> >>>>  };
> >>>>
> >>>> +#define CPUFREQ_POLICY_UNKNOWN      0
> >>>> +#define CPUFREQ_POLICY_POWERSAVE    1
> >>>> +#define CPUFREQ_POLICY_PERFORMANCE  2
> >>>> +#define CPUFREQ_POLICY_ONDEMAND     3
> >>>
> >>> Without XEN_ prefixes they shouldn't appear in a public header. But
> >>> do we need ...
> >>>
> >>>>  struct xen_get_cppc_para {
> >>>>      /* OUT */
> >>>> +    uint32_t policy; /* CPUFREQ_POLICY_xxx */
> >>>
> >>> ... the new field at all? Can't you synthesize the kind-of-governor
> >>> into struct xen_get_cpufreq_para's respective field? You invoke both
> >>> sub-ops from xenpm now anyway ...
> >>>
> >>
> >> Maybe I could borrow the governor field to indicate policy info, like the
> >> following in print_cpufreq_para(), then we don't need to add the new
> >> field "policy":
> >> ```
> >> +    /* Translate governor info to policy info in CPPC active mode */
> >> +    if ( is_cppc_active )
> >> +    {
> >> +        if ( !strncmp(p_cpufreq->u.s.scaling_governor,
> >> +                      "ondemand", CPUFREQ_NAME_LEN) )
> >> +            printf("cppc policy           : ondemand\n");
> >> +        else if ( !strncmp(p_cpufreq->u.s.scaling_governor,
> >> +                           "performance", CPUFREQ_NAME_LEN) )
> >> +            printf("cppc policy           : performance\n");
> >> +        else if ( !strncmp(p_cpufreq->u.s.scaling_governor,
> >> +                           "powersave", CPUFREQ_NAME_LEN) )
> >> +            printf("cppc policy           : powersave\n");
> >> +        else
> >> +            printf("cppc policy           : unknown\n");
> >> +    }
> >> +
> >> ```
> >
> > Something like this is what I was thinking of, yes.
>
> Albeit - why the complicated if/else sequence? Why not simply print the field
> the hypercall returned?
>

The userspace governor doesn't have a corresponding policy. I could simplify it to:
```
    if ( !strncmp(p_cpufreq->u.s.scaling_governor,
                  "userspace", CPUFREQ_NAME_LEN) )
        printf("policy               : unknown\n");
    else
        printf("policy               : %s\n",
               p_cpufreq->u.s.scaling_governor);
```
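
For context, a minimal self-contained sketch of that mapping, written as a
standalone program; the helper name cppc_policy_label(), the main() driver and
the local CPUFREQ_NAME_LEN stand-in are only illustrative, not part of the
actual xenpm patch:
```
#include <stdio.h>
#include <string.h>

/* Local stand-in for the Xen cpufreq name length limit. */
#define CPUFREQ_NAME_LEN 16

/*
 * Map a governor name, as returned in scaling_governor, to the policy label
 * printed in CPPC active mode.  "userspace" has no corresponding policy, so
 * it falls back to "unknown"; any other governor name is reused verbatim.
 */
static const char *cppc_policy_label(const char *scaling_governor)
{
    if ( !strncmp(scaling_governor, "userspace", CPUFREQ_NAME_LEN) )
        return "unknown";

    return scaling_governor;
}

int main(void)
{
    const char *governors[] = { "ondemand", "performance", "powersave",
                                "userspace" };

    for ( unsigned int i = 0; i < sizeof(governors) / sizeof(*governors); i++ )
        printf("policy               : %s\n",
               cppc_policy_label(governors[i]));

    return 0;
}
```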


> Jan
