[Xen-devel] credit accounting question
The credit scheduler uses a 10ms timer for per-vcpu accounting and a 30ms timer for system-wide accounting. My question is whether the current accounting is a bit rough. A vcpu may execute for only a small quantum, say 1ms, before being scheduled out, yet its credit is debited by 100 because the 10ms tick happened to expire in that window. Another vcpu may execute for 18ms but also have only 100 credits subtracted, if just one tick lands during its run. Can this result in unfair credit accounting in some patterns? For example:

cpu0
    A: 75 (spin, under) -> current
    B: 75 (spin, under)
    C: 75 (spin, under)
    ----
    D: 75 (io, under)   -> blocked

'A' executes for 8ms first, and then D is woken up:

cpu0
    D: 75 (io, under)   -> current
    B: 75 (spin, under)
    C: 75 (spin, under)
    A: 75 (spin, under) -> credit is still 75

'D' executes for 1ms and then sleeps again. Now B runs:

cpu0
    B: 75 (spin, under) -> current
    C: 75 (spin, under)
    A: 75 (spin, under) -> credit is still 75
    ----
    D: 75 (io, under)   -> sleeping

'B' executes for 2ms, with csched_tick triggered in between. Then 'D' is woken up again:

cpu0
    D: 75 (io, under)   -> current
    C: 75 (spin, under)
    A: 75 (spin, under) -> credit is still 75
    B: -25 (spin, over) -> lower priority

The net effect is that within the last accounting cycle (30ms), 'B' is put at a lower priority than the other spinning vcpus, even though it ran for less time than 'A' did. I am not sure whether this is an over-sensitive concern for real workloads, since the above is just one scenario I constructed; perhaps such transient unfairness evens out in the long run, from an average point of view.

Purely from a design point of view, how much overhead would it add to the schedule path to do fine-grained accounting there? The accounting logic in csched_vcpu_acct seems simple enough (rough sketches of both schemes are appended below). csched_cpu_pick could still be kept on the 10ms tick, or would relaxing it to 30ms also be OK?

Thanks,
Kevin
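To make the tick rounding concrete, here is a minimal sketch of the per-tick debit as I understand it. This is illustrative C, not the actual sched_credit.c code; the struct and function names are made up:

    #include <stdio.h>

    #define CREDITS_PER_TICK 100    /* 10ms tick at 10 credits/ms */

    struct vcpu_acct {
        const char *name;
        int credit;
    };

    /* Fired every 10ms: whichever vcpu happens to be running when the
     * timer expires pays for the whole tick, regardless of how long it
     * actually ran within that tick. */
    static void tick_acct(struct vcpu_acct *curr)
    {
        curr->credit -= CREDITS_PER_TICK;
    }

    int main(void)
    {
        struct vcpu_acct b = { "B", 75 };

        /* 'B' runs for only 2ms, but one tick lands in that window. */
        tick_acct(&b);
        printf("%s: credit %d\n", b.name, b.credit); /* prints "B: credit -25" */
        return 0;
    }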
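By contrast, the fine-grained scheme I have in mind would stamp a vcpu when it is scheduled in and charge its exact runtime when it is scheduled out. Again just a sketch under my assumptions; vcpu_acct, burn_credits and now_ns are hypothetical names, not existing Xen interfaces:

    #include <stdint.h>

    #define CREDITS_PER_MSEC 10
    #define NSEC_PER_MSEC    1000000ULL

    struct vcpu_acct {
        int      credit;
        uint64_t start_ns;  /* stamped when the vcpu is scheduled in */
    };

    /* Called from the schedule path when 'prev' is descheduled, with
     * now_ns the current system time in nanoseconds. */
    static void burn_credits(struct vcpu_acct *prev, uint64_t now_ns)
    {
        uint64_t ran_ns = now_ns - prev->start_ns;

        /* Debit proportional to actual runtime: a 1ms run costs 10
         * credits and an 18ms run costs 180, so the tick rounding
         * shown above disappears. */
        prev->credit -= (int)(ran_ns * CREDITS_PER_MSEC / NSEC_PER_MSEC);
    }

The extra cost per context switch looks like one timestamp read plus a multiply and a divide, which seems small next to the rest of the schedule path.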