
RE: [Xen-devel] [PATCH] scheduler rate controller



> 
> Hey Hui,  Sorry for the delay in response -- FYI I'm at the XenSummit Korea
> now, and I'll be on holiday next week.
> 

Have a good trip in Korea and enjoy the holiday! And say hi to everyone 
there.

> I'm attaching a prototype minimum timeslice patch that I threw together last
> week.  It currently hangs during boot, but it will give you the idea of what I
> was thinking of.
> 
> Hui, can you let me know what you think of the idea, and if you find it
> interesting, could you try to fix it up, and test it?  Testing it with bigger 
> values
> like 5ms would be really interesting.

I agree that this idea seems more natural and proper, provided it can solve the 
two problems I described above. We need data to prove or disprove it.
As you mentioned, this method should give results similar to the patch I sent 
when the delay value is set to 10ms in the excessive case.
That suggests an idea that may strengthen your proposal:
1. We still count the number of scheduling events during each period (for example, 10ms).
2. That count is then used to choose the delay value adaptively.
For example, if the scheduling count is very excessive, we can use a longer 
delay, such as 5ms or 10ms; if the count is small, we can use a short delay, 
such as 1ms, 500us, or even zero. In this way, the delay value is chosen 
adaptively.
I think this can solve the potential problems I described above.
George, what do you think of this?
I'd like to try it and see the results, and perhaps also compare the results 
of the different solutions. As you know, the SPECvirt workloads are complex 
enough that I will need some time to produce these numbers :).
We also have a set of small workloads for quick testing.



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
