Re: [PATCH 1/5] x86/time: deal with negative deltas in get_s_time_fixed()

> You did read my recurring explanation beyond the IPI sending, didn't you?
> Of course IPI arrival may vary across cores / threads. Yet the term
> "rendezvous" is used because CPUs having received the IPI are then held
> in a waiting loop, until _all_ CPUs have made it there. Then CPU0
> indicates to all of them simultaneously to move to the next step. There's
> going to again be some variance (especially on NUMA, where the memory
> write needs to propagate to all nodes), but at least within a single node
> that should be pretty low. The main source of variance I would expect
> there would be hyperthreads competing with one another in a single core.

Yes, I saw it. I'm not trying to dispute your understanding of the situation. The difference may be small, but it adds up.
> Sure, I don't doubt you make those observations. But we're still trying to
> converge on a theory on what these may be caused by.
I can't tell you exactly what the main cause of the delay is. I can only list possible factors:
1. Rounding error, which varies for each core (see the sketch below);
2. Delay in IPI delivery speed (even at the hardware level, signal delivery shouldn't happen simultaneously, and some cores may have interrupts disabled);
3. CPU frequency boost, which allows some cores to execute code faster. On modern CPUs this doesn't affect the TSC frequency, but the problem is that the counter will be read at different times;
4. The initial difference in TSC counter values, which for cores within a single CPU should be no more than 100 ns, and in the case of NUMA no more than 1000 ns.

I can't speak about percentages; I wasn't involved in CPU development, but there are many reasons not to allow cores to compete in sequence-increment speed.
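
As a rough illustration of item 1 (not Xen code -- the function and the multiplier value below are made up for the example, assuming a 3.00 GHz TSC as in the logs): a mul_frac/shift style cycles-to-nanoseconds conversion cannot represent the factor 1/3 exactly, so every conversion truncates, by an amount that depends on the particular delta each core happens to scale.

#include <stdint.h>
#include <stdio.h>

/*
 * Fixed-point cycles -> ns conversion: ns = delta * mul_frac >> 32.
 * The 128-bit intermediate (GCC/Clang extension) avoids overflow for
 * large deltas.
 */
static uint64_t cycles_to_ns(uint64_t delta, uint32_t mul_frac)
{
    return (uint64_t)(((unsigned __int128)delta * mul_frac) >> 32);
}

int main(void)
{
    /* ~2^32 / 3 for an assumed 3.00 GHz TSC, rounded down. */
    const uint32_t mul_frac = 0x55555555;

    /* 3,000,000 cycles is exactly 1,000,000 ns at 3 GHz, but ... */
    printf("%llu ns\n", (unsigned long long)cycles_to_ns(3000000, mul_frac)); /* prints 999999  */
    printf("%llu ns\n", (unsigned long long)cycles_to_ns(3000001, mul_frac)); /* prints 1000000 */
    return 0;
}

Here 3,000,000 cycles, which is exactly 1 ms at 3.00 GHz, comes out as 999,999 ns, while 3,000,001 cycles comes out as 1,000,000 ns.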
> The log entries aren't in CPU order, and CPUs 1 and 2 actually have
> identical values on the rhs. That doesn't quite fit what you have said so
> far. CPU3's value is also lower than CPU0's.
>
> While CPU numbers happen to be in sequence here, the rhs values aren't equally
> ordered.
>
> Also really here it is
> 22915730869696 - 22915730869993 = -297 * 3 (3.00 GHz) = 891 ahead
>
> Similarly here. Yes, the gap increases, yet that's not a lag of CPU3 past
> CPU0, but exactly the other way around.
>
> As per above - no, I don't think I can see that. Or maybe I'm misreading the
> numbers as well as what you have been saying.
During the first few hours the situation can be blurred by possible race conditions. After two days it becomes more or less clear:

254478162020920 (core 0) > 254478162018972 (core 1) > 254478162018429 (core 2) > 254478162017636 (core 3)

The lower the core number, the further it pulls ahead. It's possible that this is indeed related to which core is most heavily loaded (which one activates CPU boost more often). In my configuration, the first three cores are dedicated to DOM0, and the fourth is reserved for virtual machines. The first core ends up being the most heavily loaded due to interrupt handling, etc.

I can also add that after replacing get_s_time_fixed with scale_delta, the difference stops accumulating. At this point, it's clear to me that the problem is the use of previous last_stime values. The nature of this CPU behavior is unlikely to be understood at the software level. Of course, all the processors I tested on have the constant_tsc, nonstop_tsc, and tsc_known_freq flags.
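
As a rough sketch of that hypothesis (again not Xen code; the helper and the assumed 3.00 GHz multiplier are made up for the example): summing truncated per-interval conversions onto the previous value falls behind converting the accumulated delta in one go by a fraction of a nanosecond per step -- and if the per-step truncation differs between cores, their values drift apart accordingly.

#include <stdint.h>
#include <stdio.h>

/* Fixed-point cycles -> ns: ns = delta * mul_frac >> 32 (128-bit intermediate). */
static uint64_t cycles_to_ns(uint64_t delta, uint32_t mul_frac)
{
    return (uint64_t)(((unsigned __int128)delta * mul_frac) >> 32);
}

int main(void)
{
    const uint32_t mul_frac = 0x55555555;   /* ~2^32 / 3, i.e. an assumed 3.00 GHz TSC */
    const uint64_t step = 3000000000ULL;    /* one second's worth of cycles per calibration */
    uint64_t incremental = 0, total_cycles = 0;

    for (unsigned int i = 0; i < 2 * 24 * 3600; i++) {  /* ~2 days of 1 s steps */
        total_cycles += step;
        /* Build on the previous value, as a per-CPU running stime would. */
        incremental += cycles_to_ns(step, mul_frac);
    }

    printf("sum of increments: %llu ns\n", (unsigned long long)incremental);
    printf("single conversion: %llu ns\n",
           (unsigned long long)cycles_to_ns(total_cycles, mul_frac));
    return 0;
}

In this toy model the two results differ by well over a hundred microseconds after two days of one-second steps; the absolute number means nothing, the point is only that the divergence grows with the number of calibration steps.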
On 12.01.2026 17:41, Anton Markov wrote:
>>
>> That calls on_selected_cpus(), but send_IPI_mask() may then still resort to
>> all-but-self. In that case all IPIs are sent in one go.
>
>> Plus as said, how IPIs are sent doesn't matter for the invocation of
>> time_calibration_rendezvous_tail(). They'll all run at the same time, not
>> one after the other.
>
> At the hardware level, no one can guarantee that the processors will
> simultaneously respond to the signal and execute your code nanoseconds after
> you send the IPI. Especially when we're talking about NUMA configurations. I'm
> afraid what is and isn't possible under the laws of physics is also beyond
> the scope of this thread.
You did read my recurring explanation beyond the IPI sending, didn't you?
Of course IPI arrival may vary across cores / threads. Yet the term
"rendezvous" is used because CPUs having received the IPI are then held
in a waiting loop, until _all_ CPUs have made it there. Then CPU0
indicates to all of them simultaneously to move to the next step. There's
going to again be some variance (especially on NUMA, where the memory
write needs to propagate to all nodes), but at least within a single node
that should be pretty low. The main source of variance I would expect
there would be hyperthreads competing with one another in a single core.
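
(As an aside, for readers without the code at hand: a minimal user-space sketch of this rendezvous pattern -- made-up names, plain threads standing in for IPIs, not the actual Xen time_calibration rendezvous code -- looks roughly like this.)

#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define NR_CPUS 4

static atomic_uint nr_arrived;  /* "CPUs" that have reached the rendezvous */
static atomic_uint go;          /* released once by "CPU0" */

static void *cpu_thread(void *arg)
{
    unsigned int cpu = (unsigned int)(uintptr_t)arg;

    /* Models entering the handler after the IPI: announce arrival ... */
    atomic_fetch_add(&nr_arrived, 1);

    if (cpu == 0) {
        /* ... "CPU0" waits until everyone is there, then releases them all. */
        while (atomic_load(&nr_arrived) != NR_CPUS)
            ;
        atomic_store(&go, 1);
    } else {
        /* ... the others spin until CPU0 signals the next step. */
        while (!atomic_load(&go))
            ;
    }

    /* All "CPUs" proceed at (nearly) the same time, modulo cache/NUMA latency. */
    printf("cpu %u proceeds\n", cpu);
    return NULL;
}

int main(void)
{
    pthread_t t[NR_CPUS];

    for (uintptr_t i = 0; i < NR_CPUS; i++)
        pthread_create(&t[i], NULL, cpu_thread, (void *)i);
    for (unsigned int i = 0; i < NR_CPUS; i++)
        pthread_join(t[i], NULL);
    return 0;
}

The point is that the spin on the shared flag, not the IPI delivery itself, is what lines the CPUs up; any residual skew comes from how quickly each CPU observes CPU0's single write.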
>> Since further down you build upon that "IPI lag", I fear we first need to
>> settle on this aspect of your theory.
>
> I've already provided the delay logs. It's not hard for me to repeat.
Sure, I don't doubt you make those observations. But we're still trying to
converge on a theory on what these may be caused by.
> 2 hours of work:
>
>> (XEN) update stime on time calibrate 0, 8564145820102 -> 8565145861597
>> (8565145862216, 0)
>> (XEN) update stime on time calibrate 1, 8564145820129 -> 8565145861609
>> (8565145863957, 0)
>> (XEN) update stime on time calibrate 3, 8564145819996 -> 8565145861491
>> (8565145864800, 0)
>> (XEN) update stime on time calibrate 2, 8564145820099 -> 8565145861609
>> (8565145865372, 0)
>>
>> 8565145861609 - 8565145861491 = 115 * 3 (3.00 GHz) = 345 lag
The log entries aren't in CPU order, and CPUs 1 and 2 actually have
identical values on the rhs. That doesn't quite fit what you have said so
far. CPU3's value is also lower than CPU0's.
> 3 hours of work:
>
>> (XEN) update stime on time calibrate 0, 22914730829200 -> 22915730869993
>> (22915730870665, 0)
>> (XEN) update stime on time calibrate 1, 22914730829073 -> 22915730869889
>> (22915730870693, 0)
>> (XEN) update stime on time calibrate 2, 22914730829052 -> 22915730869841
>> (22915730872231, 0)
>> (XEN) update stime on time calibrate 3, 22914730828892 -> 22915730869696
>> (22915730872096, 0)
>>
>> 22915730869993 - 22915730869696 = 297 * 3 (3.00 GHz) = 891 lag
While CPU numbers happen to be in sequence here, the rhs values aren't equally
ordered.
Also really here it is
22915730869696 - 22915730869993 = -297 * 3 (3.00 GHz) = 891 ahead
> 2-3 day of work:
>
>> (XEN) update stime on time calibrate 0, 254477161980127 -> 254478162020920
>> (254478162021549, 0)
>> (XEN) update stime on time calibrate 2, 254477161977638 -> 254478162018429
>> (254478162022187, 0)
>> (XEN) update stime on time calibrate 1, 254477161978192 -> 254478162018972
>> (254478162022776, 0)
>> (XEN) update stime on time calibrate 3, 254477161976832 -> 254478162017636
>> (254478162021394, 0)
>>
>> 254478162020920 - 254478162017636 = 3284 * 3 (3.00 GHz) = 9852 lag
Similarly here. Yes, the gap increases, yet that's not a lag of CPU3 past
CPU0, but exactly the other way around.
> As you can see, the core lag is strictly determined by their sequence
> number.
As per above - no, I don't think I can see that. Or maybe I'm misreading the
numbers as well as what you have been saying.
Jan