
[Xen-devel] Is: drivers/cpufreq/cpufreq-xen.c Was:Re: [PATCH 2 of 2] linux-xencommons: Load processor-passthru

.. snip..
>> Both of them (acpi-cpufreq.c and powernow-k8.c) have a symbol
>> dependency on drivers/acpi/processor.c
> But them being 'm' or 'y' shouldn't matter in the end.

I thought you were saying it does matter - since it should be done around
the same time the cpufreq drivers are loaded?
.. snip..
>> For a), this would mean some form of unregistering the existing
>> cpufreq scaling drivers. The reason
> Or loading before them (and not depending on them), thus
> preventing them from loading successfully.

I think what you are suggesting is to write a driver in drivers/cpufreq/
that either gets started before the other ones (if built-in) or, if built
as a module, gets loaded from xencommons. That driver would then make the
calls to acpi_processor_preregister_performance(),
acpi_processor_register_performance() and acpi_processor_notify_smm().
It would function as a cpufreq scaling driver, but in reality all calls
to it from the cpufreq gov-* drivers would end up as nops.
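
A minimal sketch of such a stub driver might look like the following. This
is illustrative only - the driver name, per-CPU bookkeeping and error
handling are my assumptions, not the actual cpufreq-xen.c:

```c
/*
 * Illustrative sketch of the stub scaling driver discussed above.
 * Not the real drivers/cpufreq/cpufreq-xen.c; names are hypothetical.
 */
#include <linux/module.h>
#include <linux/cpufreq.h>
#include <acpi/processor.h>

static DEFINE_PER_CPU(struct acpi_processor_performance, xen_perf);

static int xen_cpufreq_init(struct cpufreq_policy *policy)
{
	/*
	 * Let the ACPI core do the _PSS parsing for us.
	 * (acpi_processor_preregister_performance() would be called
	 * once beforehand for the _PSD coordination data; omitted here.)
	 */
	acpi_processor_register_performance(&per_cpu(xen_perf, policy->cpu),
					    policy->cpu);
	/* Tell the firmware the OS is taking over P-state control. */
	return acpi_processor_notify_smm(THIS_MODULE);
}

static int xen_cpufreq_verify(struct cpufreq_policy *policy)
{
	return 0;	/* nop: the hypervisor owns frequency selection */
}

static int xen_cpufreq_target(struct cpufreq_policy *policy,
			      unsigned int target_freq, unsigned int relation)
{
	return 0;	/* nop: requests from the gov-* drivers are ignored */
}

static struct cpufreq_driver xen_cpufreq_driver = {
	.name	= "cpufreq-xen",
	.init	= xen_cpufreq_init,
	.verify	= xen_cpufreq_verify,
	.target	= xen_cpufreq_target,
};

static int __init xen_cpufreq_register(void)
{
	/* Registering first keeps acpi-cpufreq/powernow-k8 from binding. */
	return cpufreq_register_driver(&xen_cpufreq_driver);
}
module_init(xen_cpufreq_register);
```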

Dave, would you be Ok with a driver like that in your tree?
>> for that is we want to use the generic ones (acpi-cpufreq and
>> powernow-k8) b/c they do all the filtering and parsing of the ACPI
>> data instead of re-implementing it in our own cpufreq-xen-scaling.

I don't know what I was reading before, but the filtering/parsing looks
to be done via those acpi_processor_* calls. So it sounds like it could
be done that way.

>> Though one other option is to export both powernow-k8 and
>> acpi-cpufreq functions that do this and use them within the
>> cpufreq-xen-scaling-driver but that sounds icky.
> Indeed.
>> 2). Upload the power management information up to the hypervisor.
> Which doesn't require cpufreq drivers at all (in non-pv-ops we simply
> suppress the CPU_FREQ config option when XEN is set).

> Jan
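
The suppression Jan mentions could be expressed as a Kconfig dependency
along these lines (an illustrative sketch; the actual non-pv-ops forward
port may word it differently):

```
# Illustrative sketch only -- not the actual non-pv-ops Kconfig.
config CPU_FREQ
	bool "CPU Frequency scaling"
	depends on !XEN
```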
