Re: [PATCH 1/5] x86/time: deal with negative deltas in get_s_time_fixed()
On 12.01.2026 17:41, Anton Markov wrote:
>> That calls on_selected_cpus(), but send_IPI_mask() may then still
>> resort to all-but-self. In that case all IPIs are sent in one go.
>> Plus as said, how IPIs are sent doesn't matter for the invocation of
>> time_calibration_rendezvous_tail(). They'll all run at the same time,
>> not one after the other.
>
> At the hardware level, no one can guarantee that the processors will
> simultaneously respond to the signal and execute your code nanoseconds
> after you send the IPI. Especially when we're talking about NUMA
> configurations. I'm afraid what is and isn't possible under the laws
> of physics is also beyond the scope of this thread.

You did read my recurring explanation beyond the IPI sending, didn't
you? Of course IPI arrival may vary across cores / threads. Yet the term
"rendezvous" is used because CPUs having received the IPI are then held
in a waiting loop, until _all_ CPUs have made it there. Then CPU0
indicates to all of them simultaneously to move to the next step (a
minimal sketch of this wait/release pattern follows after this mail).
There's going to again be some variance (especially on NUMA, where the
memory write needs to propagate to all nodes), but at least within a
single node that should be pretty low. The main source of variance I
would expect there would be hyperthreads competing with one another in a
single core.

>> Since further down you build upon that "IPI lag", I fear we first
>> need to settle on this aspect of your theory.
>
> I've already provided the delay logs. It's not hard for me to repeat.

Sure, I don't doubt you make those observations. But we're still trying
to converge on a theory of what these may be caused by.

> 2 hours of work:
>
>> (XEN) update stime on time calibrate 0, 8564145820102 -> 8565145861597 (8565145862216, 0)
>> (XEN) update stime on time calibrate 1, 8564145820129 -> 8565145861609 (8565145863957, 0)
>> (XEN) update stime on time calibrate 3, 8564145819996 -> 8565145861491 (8565145864800, 0)
>> (XEN) update stime on time calibrate 2, 8564145820099 -> 8565145861609 (8565145865372, 0)
>>
>> 8565145861609 - 8565145861491 = 115 * 3 (3.00 GHz) = 345 lag

The log entries aren't in CPU order, and CPUs 1 and 2 actually have
identical values on the rhs. That doesn't quite fit what you have said
so far. CPU3's value is also lower than CPU0's.

> 3 hours of work:
>
>> (XEN) update stime on time calibrate 0, 22914730829200 -> 22915730869993 (22915730870665, 0)
>> (XEN) update stime on time calibrate 1, 22914730829073 -> 22915730869889 (22915730870693, 0)
>> (XEN) update stime on time calibrate 2, 22914730829052 -> 22915730869841 (22915730872231, 0)
>> (XEN) update stime on time calibrate 3, 22914730828892 -> 22915730869696 (22915730872096, 0)
>>
>> 22915730869993 - 22915730869696 = 297 * 3 (3.00 GHz) = 891 lag

While CPU numbers happen to be in sequence here, the rhs values aren't
equally ordered. Also really here it is

22915730869696 - 22915730869993 = -297 * 3 (3.00 GHz) = 891 ahead

> 2-3 days of work:
>
>> (XEN) update stime on time calibrate 0, 254477161980127 -> 254478162020920 (254478162021549, 0)
>> (XEN) update stime on time calibrate 2, 254477161977638 -> 254478162018429 (254478162022187, 0)
>> (XEN) update stime on time calibrate 1, 254477161978192 -> 254478162018972 (254478162022776, 0)
>> (XEN) update stime on time calibrate 3, 254477161976832 -> 254478162017636 (254478162021394, 0)
>>
>> 254478162020920 - 254478162017636 = 3284 * 3 (3.00 GHz) = 9852 lag

Similarly here. Yes, the gap increases, yet that's not a lag of CPU3
past CPU0, but exactly the other way around.
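As an aside, the arithmetic in the quoted snippets converts a
system-time delta -- get_s_time() values are in nanoseconds -- into TSC
cycles by scaling with the core clock: 3 cycles per nanosecond at
3.00 GHz. Below is a minimal standalone sketch of that conversion in
plain C; ns_to_cycles() and the cpu_khz parameter are illustrative names
made up here, not Xen internals.

#include <stdint.h>
#include <stdio.h>

/* delta_ns * (cycles per second) / 1e9; with a kHz frequency this is
 * delta_ns * cpu_khz / 1e6. */
static int64_t ns_to_cycles(int64_t delta_ns, uint64_t cpu_khz)
{
    return delta_ns * (int64_t)cpu_khz / 1000000;
}

int main(void)
{
    /* The "3 hours of work" sample: CPU3's rhs value minus CPU0's. */
    int64_t delta_ns = 22915730869696LL - 22915730869993LL; /* -297 */

    /* Prints -891: the sign is the opposite of the quoted "lag"
     * reading, as pointed out above. */
    printf("%lld cycles at 3.00 GHz\n",
           (long long)ns_to_cycles(delta_ns, 3000000));
    return 0;
}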
> As you can see, the core lag is strictly determined by their sequence
> number.

As per above - no, I don't think I can see that. Or maybe I'm misreading
the numbers as well as what you have been saying.

Jan
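The rendezvous behaviour described earlier in the mail -- CPUs parking
in a wait loop until all have checked in, then being released together
by a single write from CPU0 -- can be sketched in isolation. What
follows is a hedged, self-contained illustration in C11 with pthreads,
not Xen's actual time_calibration_std_rendezvous(); rendezvous_wait(),
NR_WORKERS and the two counters are made-up names, and the single-shot
flags stand in for Xen's per-calibration-period state.

#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define NR_WORKERS 4

static atomic_uint arrived; /* CPUs that have reached the rendezvous */
static atomic_uint go;      /* CPU0's "move to the next step" signal */

static void rendezvous_wait(unsigned int cpu)
{
    if ( cpu == 0 )
    {
        /* CPU0: wait until every other CPU is parked in its loop... */
        while ( atomic_load(&arrived) != NR_WORKERS - 1 )
            ;
        /* ...then release all of them with a single store. */
        atomic_store(&go, 1);
    }
    else
    {
        atomic_fetch_add(&arrived, 1);
        while ( !atomic_load(&go) )
            ; /* park until CPU0 says go */
    }
}

static void *worker(void *arg)
{
    unsigned int cpu = (unsigned int)(uintptr_t)arg;

    rendezvous_wait(cpu);
    /* All workers leave near-simultaneously: arrival jitter is gone;
     * only propagation of the 'go' store separates them now. */
    printf("cpu%u proceeding\n", cpu);
    return NULL;
}

int main(void)
{
    pthread_t t[NR_WORKERS];
    unsigned int i;

    for ( i = 0; i < NR_WORKERS; i++ )
        pthread_create(&t[i], NULL, worker, (void *)(uintptr_t)i);
    for ( i = 0; i < NR_WORKERS; i++ )
        pthread_join(t[i], NULL);
    return 0;
}

This is the point the mail makes: however unevenly the workers arrive
(IPI latency, NUMA distance), the spread on exit is bounded by the
propagation of the one releasing store, not by the arrival jitter.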