
[Xen-devel] Xen credit scheduler question



Hi all (and Mr. Dunlap in particular),

 

I have a question about the credit (and ultimately credit2) scheduler that I hope you can help me with.

 

I have read the white paper “Scheduler development update” and as much material on the credit scheduler as I can find, but I am still not completely clear on how I should think about the cap.

 

Example scenario:

 

  • Server hardware: 2 sockets, 8 cores per socket, 2 hardware threads per core (total of 32 hardware threads)
  • Test VM: a single virtual machine with a single vCPU, weight=256 and cap=100%

 

In this scenario, from what I understand, I should be able to load the Test VM with traffic up to a maximum of approximately 1/32 of the aggregate compute capacity of the server. The total CPU utilization of the server hardware should therefore be approximately 3.1% (1/32), plus the overhead of dom0 (say 1-2%). The credits available to any vCPU capped at 100% should be equal to 1/32 of the aggregate compute available for the whole server, correct?
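
For concreteness, this is roughly how I am setting the parameters with xl (a minimal sketch; "test-vm" is just a placeholder for my actual domain name):

    # show the current credit-scheduler weight and cap for the domain
    xl sched-credit -d test-vm

    # apply the scenario values: weight=256, cap=100
    xl sched-credit -d test-vm -w 256 -c 100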

  

Put simply, is there a way to constrain a VM with 1 vCPU to consume no more than 0.5 of a physical core (hyper-threaded) on the server hardware described above? Does the cap help in that respect?
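
In xl terms, what I am hoping for is something like the following (again just a sketch with a placeholder domain name; I am not sure whether the cap behaves this way once hyperthreading is in the picture):

    # attempt to limit the domain to roughly half of one physical core
    xl sched-credit -d test-vm -w 256 -c 50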

 

However, I have been struggling to understand how the scheduler can deal with the uncertainty that hyperthreading introduces. I know this is an issue you are tackling in the credit2 scheduler, but I would like to know your thoughts on this problem (if you are able to share them). Any insight or assistance you could offer would be greatly appreciated.

 

Thanks very much and best regards,

 

- Mike


Michael Palmeter | Sr. Director of Product Management, Oracle
Oracle Development
200 Oracle Parkway | Redwood Shores, California 94065


 

