
RE: Scheduler "Burst" (was Re: [Xen-devel] domU config file options for scheduling)



I don't quite understand the problem here. Does that mean the current 'cap'
implemented in the credit scheduler is not a real cap, and some VM may
still go over its specified cap and grab the CPU for an unexpected amount
of time, so Thomas wants to add another 'burst-cap' to limit that abusive
behaviour? If that's the case, we should fix the current implementation.

Going back to the requirement itself, I agree with George that a
second-granularity restriction can easily be done in a daemon.
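
For illustration only, a minimal dom0 polling daemon along those lines
could be built purely on the existing xm interface. The domain name, caps
and intervals below are invented, and this is just a sketch of the idea,
not a proposed implementation:

#!/usr/bin/env python
# Illustration only: a trivial dom0 policy daemon built on the existing
# xm interface.  The domain name, caps and intervals are invented for the
# example; this is not part of Xen.
import subprocess
import time

DOM = "example-domU"   # hypothetical domain name
NORMAL_CAP = 50        # normal cap, in percent of one CPU
BURST_CAP = 80         # cap allowed while bursting
BURST_SECS = 5         # how long a burst may last
POLL_SECS = 1          # sampling interval

def set_cap(dom, cap):
    # 'xm sched-credit -d <dom> -c <cap>' changes a domain's cap at run time
    subprocess.call(["xm", "sched-credit", "-d", dom, "-c", str(cap)])

def cpu_time(dom):
    # The last column of 'xm list <dom>' is the cumulative CPU time in seconds
    line = subprocess.check_output(["xm", "list", dom]).decode().splitlines()[-1]
    return float(line.split()[-1])

prev = cpu_time(DOM)
burst_left = 0
set_cap(DOM, NORMAL_CAP)
while True:
    time.sleep(POLL_SECS)
    now = cpu_time(DOM)
    usage_pct = 100.0 * (now - prev) / POLL_SECS
    prev = now
    if burst_left == 0 and usage_pct >= NORMAL_CAP - 2:
        # Domain is pushing against its cap: let it burst for a while
        set_cap(DOM, BURST_CAP)
        burst_left = BURST_SECS
    elif burst_left > 0:
        burst_left -= POLL_SECS
        if burst_left <= 0:
            # Burst window used up: restore the normal cap
            set_cap(DOM, NORMAL_CAP)
            burst_left = 0

A real policy would also want the recovery period from Thomas's proposal
before allowing another burst; the sketch omits it for brevity.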

Thanks,
Kevin

>From: George Dunlap
>Sent: December 11, 2009 2:28
>
>[Starting a separate thread for a separate topic]
>
>Regarding the "burst" feature, as I said at the Summit, I think that,
>as described, it's too specific to your use case.  I'm sure it would be
>useful for you, but how many hosting providers would want to use that
>exact interface?  Maybe they'd like a similar feature but with slight
>changes; maybe they'd like to solve the same problem in a completely
>different way.
>
>If your "burst" time is on the order of seconds, it seems to me that a
>demon running in dom0, monitoring VM cpu usage every second or so, and
>adjusting the cap / weight could have the same overall effect using
>the existing interfaces.  Having these kinds of policies in a demon is
>a lot more flexible, as the demon can be stopped / modified /
>rewritten to the particular policy someone wants without having to
>change Xen at all.
>
> -George
>
>On Thu, Dec 10, 2009 at 7:58 AM, Thomas Goirand 
><thomas@xxxxxxxxxx> wrote:
>> Also, I had a chat with George about having a kind of "burst" feature
>> added to the credit scheduler. My idea is that, by default, someone
>> would set up a credit-burst-time and a credit-burst-cap for any VM that
>> has a credit-cap set. That could be set this way in the domU config file:
>>
>> schedcreditweight = 20
>> schedcreditcap = 50
>> schedbursttime = 5000    <--- 5 seconds max burst CPU time
>> schedburstcap = 80       <--- 80% CPU max during the burst
>> schedrecovertime = 2000  <--- 2 seconds recovery time
>>
>> Whenever a VM goes over its schedcreditcap, schedburstcap would be used
>> instead, for a time no longer than schedbursttime. When this time is
>> over, schedcreditcap would be used again, until the VM has used less
>> than schedcreditcap for a time longer than schedrecovertime.
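
Reading the proposal above, the burst logic amounts to a small per-VM
state machine. A rough sketch follows, in Python pseudocode rather than
scheduler code; the option names are taken from the config example above
and everything else is hypothetical:

# Illustration only: the proposed burst behaviour expressed as a small
# per-VM state machine (not the credit scheduler's real code).
NORMAL, BURST, RECOVER = range(3)

class BurstState:
    def __init__(self, credit_cap, burst_cap, burst_time_ms, recover_time_ms):
        self.credit_cap = credit_cap            # schedcreditcap, e.g. 50
        self.burst_cap = burst_cap              # schedburstcap, e.g. 80
        self.burst_time_ms = burst_time_ms      # schedbursttime, e.g. 5000
        self.recover_time_ms = recover_time_ms  # schedrecovertime, e.g. 2000
        self.state = NORMAL
        self.timer_ms = 0

    def effective_cap(self):
        # The cap that should be enforced right now
        return self.burst_cap if self.state == BURST else self.credit_cap

    def update(self, usage_pct, interval_ms):
        # Advance the state machine by one sampling / accounting interval
        if self.state == NORMAL:
            if usage_pct >= self.credit_cap:         # VM is pushing its cap
                self.state, self.timer_ms = BURST, 0
        elif self.state == BURST:
            self.timer_ms += interval_ms
            if self.timer_ms >= self.burst_time_ms:  # burst time exhausted
                self.state, self.timer_ms = RECOVER, 0
        else:  # RECOVER: capped again until usage stays low long enough
            if usage_pct < self.credit_cap:
                self.timer_ms += interval_ms
                if self.timer_ms >= self.recover_time_ms:
                    self.state, self.timer_ms = NORMAL, 0
            else:
                self.timer_ms = 0                    # still at the cap; start over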
>>
>> I do believe that this kind of scheduling would be REALLY useful for
>> preventing a VM from abusing CPU usage. For people doing Xen VM hosting
>> business, or cloud computing, it would be just great. The values I wrote
>> above would be a typical use.
>>
>> I had a look at the scheduler code myself, and it's not at all obvious
>> where to patch. I think I have understood the burn_credits() mechanism,
>> but I didn't work out where the values for the cap and so on are read.
>> Did I understand correctly that Xen uses 30ms cycles for the credit
>> scheduler? If that is the case, then I believe that computing which cap
>> values to use, depending on the burst parameters, for a given 30ms
>> scheduling cycle wouldn't be much overhead.
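
As a rough check of that claim, and assuming (not confirming) the 30ms
accounting period mentioned above, translating the example parameters
into cycles is just a couple of divisions:

# Back-of-the-envelope only, assuming a 30ms credit accounting period.
ACCT_PERIOD_MS = 30

burst_periods = 5000 // ACCT_PERIOD_MS    # schedbursttime  -> ~166 cycles at the 80% cap
recover_periods = 2000 // ACCT_PERIOD_MS  # schedrecovertime -> ~66 cycles below the 50% cap
print(burst_periods, recover_periods)     # 166 66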
>>
>> If nobody wants to implement this, can someone give me some pointers
>> into sched_credit.c and the relevant .h structures? If my first attempts
>> are clumsy trial and error, since I'd be a beginner at patching Xen,
>> would you have enough time to explain things and excuse a newbie?
>> Would that kind of patch be accepted if proven to work?
>>
>> Cheers,
>>
>> Thomas Goirand
>>
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

 

