Re: [Xen-devel] [RFC][PATCH] scheduler: credit scheduler for client virtualization
On Wed, Dec 3, 2008 at 9:16 AM, Keir Fraser <keir.fraser@xxxxxxxxxxxxx> wrote:
> Don't hack it into the existing sched_credit.c unless you are really sharing
> significant amounts of stuff (which it looks like you aren't?).
> sched_bcredit.c would be a cleaner name if there's no sharing. Is a new
> scheduler necessary -- could the existing credit scheduler be generalised
> with your boost mechanism to be suitable for both client and server?

I think we ought to be able to work this out; the functionality doesn't
sound that different, and as you say, keeping two schedulers around is only
an invitation to bitrot.

The more accurate credit scheduling and vcpu credit "balancing" seem like
good ideas. For the other changes, it's probably worth measuring on a
battery of tests to see what kinds of effects we get, especially on network
throughput.

Nishiguchi-san (I hope that's right!), as I understood from your
presentation, you haven't tested this on a server workload, but you predict
that the "boost" timeslice of 2ms will cause unnecessary overhead for
server workloads. Is that correct?

Couldn't we avoid the overhead this way: if a vcpu has 5 or more "boost"
credits, we simply set the next timer to 10ms. If the vcpu yields before
then, we subtract the number of "boost" credits actually used; if not, we
subtract 5. That way we're not interrupting any more frequently than we
were before. (A rough sketch of this accounting is at the end of this
mail.)

Come to think of it: won't the effect of setting the "boost" time to 2ms be
basically counteracted by giving domains boost credits? I thought the
purpose of reducing the boost time was to allow other domains to run more
quickly? But if a domain has more than 5 "boost" credits, it will run for a
full 10ms anyway. Is that not so?

Could you run your video latency measurement with all the other
optimizations, but with the "boost" time set to 10ms instead of 2ms? If it
works well, it's probably worth simply merging the bulk of your changes in
and testing with server workloads.

 -George
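P.S. To make the accounting above concrete, here's a rough sketch of what I
have in mind, written as a standalone C program rather than a real patch
against sched_credit.c. All of the names and constants (boost_vcpu,
BOOST_CAP, and so on) are made up for illustration and don't come from the
actual patch:

/* Standalone sketch of the proposed boost-credit accounting.
 * Illustration only -- none of these names come from the actual
 * patch or from sched_credit.c. */

#include <stdio.h>

#define BOOST_TSLICE_MS   2   /* short "boost" timeslice from the patch    */
#define NORMAL_TSLICE_MS 10   /* the credit scheduler's usual timeslice    */
#define BOOST_CAP         5   /* boost credits worth one full normal slice */

struct boost_vcpu {
    int boost_credits;        /* one credit == one 2ms boost slice */
};

/* Pick the length of the next timer for a boosted vcpu.  With
 * BOOST_CAP or more credits, grant a full 10ms slice so we don't
 * interrupt any more often than the scheduler does today. */
static int next_timer_ms(const struct boost_vcpu *v)
{
    return (v->boost_credits >= BOOST_CAP) ? NORMAL_TSLICE_MS
                                           : BOOST_TSLICE_MS;
}

/* Charge the vcpu when it stops running.  If it yielded early,
 * charge only the boost credits actually used (rounded up to whole
 * 2ms slices); if the timer fired, charge a flat BOOST_CAP. */
static void charge_boost(struct boost_vcpu *v, int slice_ms, int ran_ms)
{
    if (ran_ms < slice_ms)
        v->boost_credits -= (ran_ms + BOOST_TSLICE_MS - 1) / BOOST_TSLICE_MS;
    else
        v->boost_credits -= BOOST_CAP;
    if (v->boost_credits < 0)
        v->boost_credits = 0;
}

int main(void)
{
    struct boost_vcpu v = { .boost_credits = 7 };

    /* 7 >= 5 credits: the vcpu gets a full 10ms slice... */
    int slice = next_timer_ms(&v);

    /* ...but yields after 3ms, so it is charged only 2 credits. */
    charge_boost(&v, slice, 3);
    printf("after early yield: %d credits\n", v.boost_credits);  /* 5 */

    /* Still at the cap, so another 10ms slice; this time the timer
     * fires and we charge the flat 5. */
    slice = next_timer_ms(&v);
    charge_boost(&v, slice, slice);
    printf("after full slice:  %d credits\n", v.boost_credits);  /* 0 */

    return 0;
}

The property I care about is that a vcpu at or above the cap is only
interrupted at the normal 10ms boundary, so server workloads shouldn't see
any extra timer overhead from the boost machinery.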