[Xen-devel] [PATCH 0/8] x86/time: improve cross-CPU clock monotonicity (and more)
1: x86/time: adjust local system time initialization
2: x86: also generate assembler usable equates for synthesized features
3: x86/time: introduce and use rdtsc_ordered()
4: x86/time: calibrate TSC against platform timer
5: x86/time: correctly honor late clearing of TSC related feature flags
6: x86/time: support 32-bit wide ACPI PM timer
7: x86/time: fold recurring code
8: x86/time: group time stamps into a structure

The first patch was submitted on its own before; I'd like it to be looked at in the context of this entire series though, and hence I'm including it here again.

Expectations towards the improvements of what is now patch 4 were not fulfilled (it brought down the skew only by a factor of two in my measurements), but to compensate, the improvements from patch 3 were significantly larger than expected (hence the two got switched in order, albeit I can't say whether the improvements from patch 3 would be as prominent without patch 4, as I simply never measured patch 3 alone): I've observed the TSC value read without the ordering fence to be up to several hundred clocks off compared to the fenced variant.

I'd like to note though that even with the full series in place I'm still seeing some skew, and measurement over about 24h suggests that what is left now is actually increasing over time (which obviously is bad). What looks particularly odd to me is that over longer periods of time the maximum skew between hyperthreads stays relatively stable on a certain core, but eventually another core may "overtake". So far I have not had a good idea of what might be causing this (short of suspecting the constant TSC not really being all that constant).
Another observed oddity is that with this debugging code

@@ -1259,6 +1327,14 @@ time_calibration_rendezvous_tail(const s
     c->local_tsc    = rdtsc_ordered();
     c->local_stime  = get_s_time_fixed(c->local_tsc);
     c->master_stime = r->master_stime;
+{//temp
+    s64 delta = rdtsc_ordered() - c->local_tsc;
+    static s64 delta_max = 100;
+    if ( delta > delta_max ) {
+        delta_max = delta;
+        printk("CPU%02u: delta=%ld\n", smp_processor_id(), delta);
+    }
+}

     raise_softirq(TIME_CALIBRATE_SOFTIRQ);
 }

added on top of the series, deltas of beyond 20,000 get logged. Afaict (I admit I didn't check) interrupts are off here, so I can't see where such spikes would result from. Is a watchdog NMI a possible explanation for this? If so, what other bad effects on time calculations might result from such events? And how would one explain spikes of both around 13,000 and around 22,000 (I'd expect individual watchdog NMIs to always take about the same time)?

Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel