
RE: [PATCH v7 11/13] tools/cpufreq: extract CPPC para from cpufreq para


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: "Penny, Zheng" <penny.zheng@xxxxxxx>
  • Date: Tue, 26 Aug 2025 08:21:54 +0000
  • Cc: "Huang, Ray" <Ray.Huang@xxxxxxx>, Anthony PERARD <anthony.perard@xxxxxxxxxx>, Juergen Gross <jgross@xxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, "Orzel, Michal" <Michal.Orzel@xxxxxxx>, Julien Grall <julien@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Tue, 26 Aug 2025 08:22:08 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH v7 11/13] tools/cpufreq: extract CPPC para from cpufreq para

[Public]

> -----Original Message-----
> From: Jan Beulich <jbeulich@xxxxxxxx>
> Sent: Monday, August 25, 2025 11:37 PM
> To: Penny, Zheng <penny.zheng@xxxxxxx>
> Cc: Huang, Ray <Ray.Huang@xxxxxxx>; Anthony PERARD
> <anthony.perard@xxxxxxxxxx>; Juergen Gross <jgross@xxxxxxxx>; Andrew
> Cooper <andrew.cooper3@xxxxxxxxxx>; Orzel, Michal <Michal.Orzel@xxxxxxx>;
> Julien Grall <julien@xxxxxxx>; Roger Pau Monné <roger.pau@xxxxxxxxxx>; Stefano
> Stabellini <sstabellini@xxxxxxxxxx>; xen-devel@xxxxxxxxxxxxxxxxxxxx
> Subject: Re: [PATCH v7 11/13] tools/cpufreq: extract CPPC para from cpufreq 
> para
>
> On 22.08.2025 12:52, Penny Zheng wrote:
> > We extract cppc info from "struct xen_get_cpufreq_para", where it acts
> > as a member of union, and share the space with governor info.
> > However, it may fail in amd-cppc passive mode, in which governor info
> > and CPPC info could co-exist, and both need to be printed together via xenpm
> tool.
> > If we tried to still put it in "struct xen_get_cpufreq_para" (e.g.
> > just move out of union), "struct xen_get_cpufreq_para" will enlarge
> > too much to further make xen_sysctl.u exceed 128 bytes.
> >
> > So we introduce a new sub-field GET_CPUFREQ_CPPC to dedicatedly
> > acquire CPPC-related para, and make get-cpufreq-para invoke
> > GET_CPUFREQ_CPPC if available.
> > New helpers print_cppc_para() and get_cpufreq_cppc() are introduced to
> > extract CPPC-related parameters process from cpufreq para.
> >
> > Signed-off-by: Penny Zheng <Penny.Zheng@xxxxxxx>
>
> Acked-by: Jan Beulich <jbeulich@xxxxxxxx> # hypervisor
>

Thx

> > --- a/tools/libs/ctrl/xc_pm.c
> > +++ b/tools/libs/ctrl/xc_pm.c
> > @@ -288,7 +288,6 @@ int xc_get_cpufreq_para(xc_interface *xch, int cpuid,
> >          CHK_FIELD(s.scaling_min_freq);
> >          CHK_FIELD(s.u.userspace);
> >          CHK_FIELD(s.u.ondemand);
> > -        CHK_FIELD(cppc_para);
> >
> >  #undef CHK_FIELD
>
> What is done here is already less than what could be done; I think ...
>

Emm, maybe it's because we define two different cpufreq para structures, one for
user space and one for sysctl: struct xc_get_cpufreq_para and
struct xen_get_cpufreq_para. But for the cppc para, the user-space type is just
an alias:
typedef struct xen_get_cppc_para xc_cppc_para_t;
So ...

> > @@ -366,6 +365,33 @@ int xc_set_cpufreq_cppc(xc_interface *xch, int cpuid,
> >      return ret;
> >  }
> >
> > +int xc_get_cppc_para(xc_interface *xch, unsigned int cpuid,
> > +                     xc_cppc_para_t *cppc_para)
> > +{
> > +    int ret;
> > +    struct xen_sysctl sysctl = {};
> > +    struct xen_get_cppc_para *sys_cppc_para = &sysctl.u.pm_op.u.get_cppc;
> > +
> > +    if ( !xch || !cppc_para )
> > +    {
> > +        errno = EINVAL;
> > +        return -1;
> > +    }
> > +
> > +    sysctl.cmd = XEN_SYSCTL_pm_op;
> > +    sysctl.u.pm_op.cmd = GET_CPUFREQ_CPPC;
> > +    sysctl.u.pm_op.cpuid = cpuid;
> > +
> > +    ret = xc_sysctl(xch, &sysctl);
> > +    if ( ret )
> > +        return ret;
> > +
> > +    BUILD_BUG_ON(sizeof(*cppc_para) != sizeof(*sys_cppc_para));

... so maybe checking the whole structure size is enough?

> > +    memcpy(cppc_para, sys_cppc_para, sizeof(*sys_cppc_para));
>
> ... you minimally want to apply as much checking here.
>
> Jan
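To illustrate the point above: since xc_cppc_para_t is a plain typedef alias of
struct xen_get_cppc_para, the two types cannot diverge, and per-field CHK_FIELD
checks would be vacuous; a whole-structure size check before the memcpy() is the
only meaningful guard. A minimal self-contained sketch (the field layout below
is a simplified stand-in, not the real Xen definition, and copy_cppc() is a
hypothetical helper for illustration):

```c
#include <stdint.h>
#include <string.h>

/* Simplified stand-in for the real struct xen_get_cppc_para
 * (hypothetical field layout, for illustration only). */
struct xen_get_cppc_para {
    uint32_t features;
    uint32_t lowest;
    uint32_t highest;
};

/* As in the tools headers, the user-space type is a plain alias,
 * so it is the same type as the sysctl one by construction. */
typedef struct xen_get_cppc_para xc_cppc_para_t;

/* Compile-time check in the style of Xen's BUILD_BUG_ON():
 * fails to compile when cond is true (negative array size). */
#define BUILD_BUG_ON(cond) ((void)sizeof(char[1 - 2 * !!(cond)]))

static void copy_cppc(xc_cppc_para_t *dst,
                      const struct xen_get_cppc_para *src)
{
    /* The alias guarantees the layouts match; the size check
     * documents (and enforces) that assumption at build time. */
    BUILD_BUG_ON(sizeof(*dst) != sizeof(*src));
    memcpy(dst, src, sizeof(*src));
}
```

If the user-space type ever became an independent struct definition, this is the
point where per-field offset/size checks (CHK_FIELD style) would become
necessary again.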
