
Re: [Xen-devel] [PATCH v3 10/11] x86/intel_pstate: support the use of intel_pstate in pmstat.c



On 15/06/2015 17:15, Jan Beulich wrote:
> >>> On 15.06.15 at 02:30, <wei.w.wang@xxxxxxxxx> wrote:
> > On 12/06/2015 19:14, Julien Grall wrote:
> >> On 11/06/2015 23:03, Wang, Wei W wrote:
> >> > On 11/06/2015 22:02, Julien Grall wrote:
> >> >> On 11/06/2015 04:31, Wei Wang wrote:
> >> >>> -    list_for_each(pos, &cpufreq_governor_list)
> >> >>> +    if (policy->policy)
> >> >>
> >> >> What if another cpufreq decides to use policy->policy?
> >> >
> >> > What is "another cpufreq"? The "policy" is a per-CPU struct.
> >>
> >> I mean another cpufreq driver. Correct me if I'm wrong, but from the
> >> name, "policy" is not intel_pstate specific. That means that a new
> >> cpufreq driver can decide to use the field for its own purpose.
> >
> > We actually want it to be intel_pstate specific. If the maintainers
> > agree, I think renaming it to intel_pstate_policy is a good option.
> 
> No, this name is just ugly. If you need driver-specific data, have a void
> pointer in the generic structure; the driver can then allocate memory to be
> pointed to by that, and can store there whatever private data it needs.

OK. I plan to make the following changes (a rough consolidated sketch
follows the list):

1) In cpufreq_policy, add a field: void *private_data;


2) Add a new structure:

   struct intel_pstate_policy {
           unsigned int policy;
   };

3) In intel_pstate_cpu_setup():

         struct intel_pstate_policy *private_policy =
                 xzalloc(struct intel_pstate_policy);

         private_policy->policy = INTEL_PSTATE_POLICY_ONDEMAND;
         policy->private_data = private_policy;

4) In intel_pstate_cpu_exit():
         xfree(policy->private_data);

5) Change all the "if (policy->policy)" checks to "if (cpufreq_driver->setpolicy)".
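
Putting the pieces together, here is a rough sketch of what I have in
mind. It is not against the actual tree, so the return types, the error
handling and anything not already named above (only private_data,
intel_pstate_policy, INTEL_PSTATE_POLICY_ONDEMAND and the two setup/exit
functions come from this thread) should be read as assumptions;
xzalloc()/xfree() are the usual Xen allocators:

    /* 1) Generic structure gains an opaque pointer for driver-private data. */
    struct cpufreq_policy {
        /* ... existing generic fields ... */
        void *private_data;      /* owned by the active cpufreq driver */
    };

    /* 2) intel_pstate keeps its policy selection in its own structure. */
    struct intel_pstate_policy {
        unsigned int policy;     /* e.g. INTEL_PSTATE_POLICY_ONDEMAND */
    };

    /* 3) Allocate and attach the private data when the CPU is set up.
     *    (Return type and -ENOMEM handling are my assumptions.) */
    static int intel_pstate_cpu_setup(struct cpufreq_policy *policy)
    {
        struct intel_pstate_policy *private_policy =
            xzalloc(struct intel_pstate_policy);

        if ( !private_policy )
            return -ENOMEM;

        private_policy->policy = INTEL_PSTATE_POLICY_ONDEMAND;
        policy->private_data = private_policy;

        return 0;
    }

    /* 4) Free it again on exit so nothing leaks. */
    static int intel_pstate_cpu_exit(struct cpufreq_policy *policy)
    {
        xfree(policy->private_data);
        policy->private_data = NULL;

        return 0;
    }

    /* 5) Callers that used to test "if ( policy->policy )" test the
     *    driver's setpolicy hook instead:
     *
     *        if ( cpufreq_driver->setpolicy )
     *            ...
     */

With this, nothing outside the intel_pstate code needs to know the layout
of struct intel_pstate_policy, which should also address the concern above
about another cpufreq driver reusing the field for its own purpose.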


Best,
Wei

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel