[Xen-devel] Re: Scheduling groups, credit scheduler support
On 04/12/07 10:38 +0000, Keir Fraser wrote:
> On 29/11/07 20:19, "Mike D. Day" <ncmike@xxxxxxxxxx> wrote:
>> The credit implementation is limited to sharing time slices among
>> group members. All members of a group must share the master's time
>> slices with the master. If the group master is capped at 20%, the
>> cumulative total of the master and all members will be 20%. This is
>> specifically to support stub domains, which will host the device
>> model for hvm domains. The proper way to schedule a stub domain is
>> for it to share the time slices allocated to the stub's hvm domain.
>
> It's good to see these patches get aired again. I hope we can get
> some integration with the ongoing stub domain work and get some
> numbers out to prove better scalability and QoS. This would ease the
> passage of scheduler group support into xen-unstable.

I previously published some benchmarks:
http://article.gmane.org/gmane.comp.emulators.xen.devel/39818/

>> The credit scheduler runs vcpus, not domains. However, a domain's
>> vcpus are given time slices according to the credits available to
>> the domain and any caps placed on the domain. Therefore, the
>> simplest way to group domains together in the credit scheduler is
>> to assign the member domain's vcpus to the master domain. Each vcpu
>> assigned to the master domain receives a credit equal to the master
>> domain's total credit divided by the number of assigned vcpus. This
>> forces all the member domains to share the master domain's credits
>> with the master, which achieves the desired behavior.
>
> Is this the right thing to do? I would think that the desired
> behaviour is for each 'master vcpu' to freely share its credit
> allocation with its 'buddy vcpus'. The static N-way split of credits
> across vcpus within a domain makes some kind of sense, since the
> vcpus are each equally important and each independent of each other.

This is what happens with the patch today. In fact, the code that
allocates credits is untouched by the patch. The difference is that
the active vcpus of the stub domain are transferred, for accounting
purposes, to the hvm domain. So whenever the stub domain runs, those
credits are decremented from the hvm domain. If the stub domain
doesn't run, no hvm credits are decremented.

> Statically splitting credits between e.g., HVM guest domain and its
> stub domain makes less sense. One is subordinate to the other, and a
> model where the stub can 'steal' credits dynamically from the HVM
> domain seems to make more sense. Otherwise, wouldn't a uniprocessor
> HVM guest get half its credit stolen by the uniprocessor stub
> domain, even if the HVM guest is doing no I/O?

Credits are "stolen" only when the stub domain runs, so if the hvm
domain is doing no I/O then none of its credits go to the stub domain.

Mike

--
Mike D. Day
IBM LTC
Cell: 919 412-3900
Sametime: ncmike@xxxxxxxxxx AIM: ncmikeday Yahoo: ultra.runner
PGP key: http://www.ncultra.org/ncmike/pubkey.asc

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
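
A minimal sketch of the static N-way split described in the quoted
paragraphs above, assuming hypothetical structure and function names
(the patch itself reuses the credit scheduler's existing accounting
code, so none of this is the real implementation):

    #include <stddef.h>

    /* Hypothetical, simplified view of a scheduling group: a master
     * domain (e.g. an hvm guest) whose credit allocation is shared by
     * its own vcpus and any member-domain vcpus (e.g. its stub
     * domain's vcpus) assigned to it for accounting purposes. */
    struct grp_vcpu {
        int credit;                  /* credits this vcpu may still burn */
    };

    struct sched_group {
        int period_credit;           /* credits earned by the master, after caps */
        int nr_active_vcpus;         /* master vcpus + assigned member vcpus */
        struct grp_vcpu *vcpus;      /* every vcpu charged to the master */
    };

    /* Static N-way split: each vcpu charged to the master receives an
     * equal share of the master's credits, so capping the master at
     * 20% caps the cumulative total of master plus members at 20%. */
    static void group_split_credits(struct sched_group *g)
    {
        int share = g->period_credit / g->nr_active_vcpus;

        for (int i = 0; i < g->nr_active_vcpus; i++)
            g->vcpus[i].credit += share;
    }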
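A companion sketch, again with hypothetical names and reusing the types
above, of the accounting point made in the replies: credits are debited
from the hvm (master) domain only when a member vcpu actually runs, so
an idle stub domain costs its guest nothing:

    /* Called from a (hypothetical) periodic tick for a running vcpu.
     * For a stub domain's vcpu, charged_group is its hvm guest's
     * group, so the burn comes out of the guest's allocation -- and an
     * idle stub vcpu never reaches this path, so a guest doing no I/O
     * loses nothing to its stub domain. */
    static void group_burn_credits(struct sched_group *charged_group,
                                   struct grp_vcpu *v,
                                   int credits_per_tick)
    {
        v->credit -= credits_per_tick;
        charged_group->period_credit -= credits_per_tick;
    }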