[PATCH RFC] x86/time: avoid early uses of NOW() to return zero
Waiting loops like the one in flush_command_buffer() will degenerate to
infinite ones when used early enough for NOW() to still return constant
zero. Make sure the returned value at least monotonically increases.
Do this only in get_s_time(), as producing a sane value in
get_s_time_fixed() for non-zero inputs isn't reasonably possible.
Reported-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
---
RFC: This breaks at least the TSM_BOOT case in printk_start_of_line(), which
checks for NOW() returning 0 (falling back to TSM_RAW in that case).
For now I have no idea how to avoid this, except that when CPUID leaf
0x15 is available we could leverage that to put in place at least an
approximate scale value. Doing so could, however, lead to a
discontinuity (returned value moving backwards) once the final scale
value was put in place. (Note, however, that such a discontinuity can
also result from init_percpu_time() using the BSP's scale value as the
initial estimate for APs. Then again, local_time_calibration() at least
makes an attempt at avoiding this.)
RFC: While the mentioned waiting loops will generally take longer to time
out, tight loops on a very fast CPU may time out too early.
RFC: In get_s_time_fixed(), should we perhaps assert that the scale was
set?
I don't think Fixes: tags should be put here. If we did, we'd have to
enumerate all introductions of early uses of NOW() (or get_s_time()), with
the exception of those dealing with getting back 0 (which I expect is only
printk_start_of_line()).
--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -1668,6 +1668,20 @@ s_time_t get_s_time_fixed(u64 at_tsc)
s_time_t get_s_time(void)
{
+ /*
+ * Before the TSC scale is set, avoid returning constant 0 (or whatever
+ * this_cpu(cpu_time).stamp.local_stime is set to). While the returned
+ * value doesn't represent time in any way, it at least increases
+ * monotonically, thus e.g. preventing waiting loops from degenerating
+ * into entirely infinite ones.
+ */
+ if ( unlikely(!this_cpu(cpu_time).tsc_scale.mul_frac) )
+ {
+ static s_time_t counter;
+
+ return arch_fetch_and_add(&counter, 1);
+ }
+
return get_s_time_fixed(0);
}