RE: [Xen-devel] cpuidle causing Dom0 soft lockups
>>> "Tian, Kevin" <kevin.tian@xxxxxxxxx> 09.02.10 08:55 >>> >This stat would be helpful, given that you can get actual singleshot >timer trace instead of just speculating from dom0 inside. Finally got to that. While without the patch the number of single-shot events is less than that of periodic ones, it is much bigger with. Both kernel (relative to extrapolated local time) and hypervisor (relative to extrapolated system time) agree that many of them get scheduled with a time (up to about 6ms) in the past. Which means to me that the per-CPU processed_system_time isn't being maintained accurately, and I think this is a result of how it is being updated in timer_interrupt(): Other than with the global processed_system_time, the per-CPU one may not get increased even if delta_cpu was close to 3*NS_PER_TICK, due to the stolen/blocked accounting. Probably this was not a problem so far because no code outside of timer_interrupt() read this value - Keir, since I think you wrote that logic originally, any word on this? While I think this should be changed (so that the value can be used in stop_hz_timer() for a second adjustment similar to the one done with jiffies, setting the timeout to last timer interrupt + 1 tick), I meanwhile thought of a different approach to the original problem: Instead of dedicating a CPU to have the duty of maintaining global time, just limit the number of CPUs that may concurrently try to acquire xtime_lock in timer_interrupt(). While for now I set the threshold to __fls(nr_cpu_ids), even setting it to 1 gives pretty good results - no unduly high interrupt rates anymore, and only very infrequent (one every half hour or so) single-shot events that are determined to have a timeout more than 1ms in the past. Jan _______________________________________________ Xen-devel mailing list Xen-devel@xxxxxxxxxxxxxxxxxxx http://lists.xensource.com/xen-devel