
Re: [Xen-devel] Performance difference between Xen versions


  • To: "Tian, Kevin" <kevin.tian@xxxxxxxxx>
  • From: Juergen Gross <juergen.gross@xxxxxxxxxxxxxx>
  • Date: Fri, 06 May 2011 15:49:13 +0200
  • Cc: Keir Fraser <keir.xen@xxxxxxxxx>, Keir Fraser <keir@xxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, Jan Beulich <JBeulich@xxxxxxxxxx>
  • Delivery-date: Fri, 06 May 2011 06:50:14 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

On 05/03/11 05:06, Tian, Kevin wrote:
From: Keir Fraser
Sent: Monday, May 02, 2011 4:49 PM

On 02/05/2011 09:23, "Juergen Gross"<juergen.gross@xxxxxxxxxxxxxx>  wrote:

On 05/02/11 10:15, Jan Beulich wrote:
On 02.05.11 at 10:00, Juergen Gross<juergen.gross@xxxxxxxxxxxxxx>
wrote:
In the long run I'd like to make the cpufreq governor a feature of
the cpupool. This would enable an administrator of a large Xen
machine with a heterogeneous load to specify which domains should
run at full speed and which are allowed to save energy at the cost
of latency.
What do you think?
Certainly an interesting idea; the question is what an
implementation of this would look like.
Let me do some research work first :-) I hope to make a proposal soon.
I think it's a good idea, and it should be quite possible to implement cleanly.

Yes, this is a good direction. There have actually been several papers on
this topic before. Basically, it is a reasonable choice to combine
higher-level knowledge with VMM heuristics, since in virtualization and
cloud environments we usually have an intelligent management stack that
already needs to understand many high-level requirements and
characteristics. :-)

Okay, I think I understand the basic mechanisms of cpufreq stuff now :-)
I propose the following changes:

- Cpupools get a new parameter "cpufreq" which is similar to the hypervisor
  boot parameter. It is valid only if the hypervisor is responsible for cpufreq
  handling (this excludes the cases cpufreq=none and cpufreq=dom0-kernel).
- Cpupool0 is initialized with the boot parameter settings; new cpupools are
  created with the cpupool0 settings and get their own cpufreq parameters
  via libxl later. This avoids changing the interface for cpupool creation: I
  only need a new interface to set the cpufreq parameters of a cpupool, which
  can be used for changing the settings later, too. This interface could take
  the cpufreq parameters as a text string, resulting in support for exactly
  the same parameters as the hypervisor boot option.
- A cpufreq_policy spans multiple cpus of one cpupool only (if at all). This
  requires, whenever the frequency of a processor changes, a check for the
  maximum frequency to be set in its frequency domain. This is similar to
  what the ondemand governor does, but might cross cpufreq_policy boundaries.

Did I miss anything? Any other suggestions?
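The frequency-domain check from the last point could look roughly like this
(again, structures and names are hypothetical, not actual Xen code): when
cpus sharing one hardware frequency domain end up in different cpupools and
hence different cpufreq policies, the domain has to run at the highest
frequency any member cpu requests, similar to the coordination the ondemand
governor already does within one policy.

```c
#define MAX_DOMAIN_CPUS 8

/* Illustrative model of one hardware frequency domain whose member
 * cpus may belong to different cpupools (and cpufreq policies). */
struct freq_domain {
    unsigned int nr_cpus;
    unsigned int requested_khz[MAX_DOMAIN_CPUS]; /* per-cpu target */
};

/* Recompute the frequency the whole domain must run at after one cpu
 * changes its request: the maximum over all member cpus.  Unlike the
 * existing ondemand logic, this may cross cpufreq_policy boundaries. */
static unsigned int domain_target_khz(const struct freq_domain *d)
{
    unsigned int i, max = 0;

    for (i = 0; i < d->nr_cpus; i++)
        if (d->requested_khz[i] > max)
            max = d->requested_khz[i];
    return max;
}
```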


Juergen

--
Juergen Gross                 Principal Developer Operating Systems
TSP ES&S SWE OS6                       Telephone: +49 (0) 89 3222 2967
Fujitsu Technology Solutions              e-mail: juergen.gross@xxxxxxxxxxxxxx
Domagkstr. 28                           Internet: ts.fujitsu.com
D-80807 Muenchen                 Company details: ts.fujitsu.com/imprint.html


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

