
[Xen-devel] [PATCH v2] x86/hpet: Improve handling of timer_deadline



timer_deadline is only ever updated via this_cpu() in timer_softirq_action(),
so it is not going to change behind the back of the currently running cpu.

Update hpet_broadcast_{enter,exit}() to cache the value in a local variable to
avoid the repeated RELOC_HIDE() penalty.
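
As a reference for the cost being avoided: per_cpu()/this_cpu() boil down to
one RELOC_HIDE() per access.  A rough sketch of the Linux-derived macros
follows (illustrative only; the exact Xen definitions may differ):

    /* Sketch only -- not the verbatim Xen macros. */
    #define RELOC_HIDE(ptr, off)                      \
        ({ unsigned long __ptr;                       \
           __asm__ ("" : "=r"(__ptr) : "0"(ptr));     \
           (typeof(ptr)) (__ptr + (off)); })

    #define per_cpu(var, cpu) \
        (*RELOC_HIDE(&per_cpu__##var, __per_cpu_offset[cpu]))

The empty asm hides the pointer's provenance from the compiler, so each
per_cpu() read is recomputed and reloaded from memory.  Caching the result in
a local variable lets the compiler keep it in a register instead.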

handle_hpet_broadcast() reads the timer_deadlines of remote cpus, but there is
no need to force the read for cpus which are not present in the mask.  The one
requirement is that the value is sampled only once (which already happens as a
side effect of RELOC_HIDE()), and using ACCESS_ONCE() makes that requirement
explicit.
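
For comparison, ACCESS_ONCE() achieves the single-read guarantee with a
volatile cast rather than an asm barrier.  A sketch of the conventional
definition, which Xen carries in a similar form:

    /* Conventional definition -- sketch. */
    #define ACCESS_ONCE(x) (*(volatile typeof(x) *)&(x))

    /* Usage as in the hunk below: one guaranteed load, no fences. */
    s_time_t deadline = ACCESS_ONCE(per_cpu(timer_deadline, cpu));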

Bloat-o-meter shows a modest improvement:

  add/remove: 0/0 grow/shrink: 0/3 up/down: 0/-144 (-144)
  function                                     old     new   delta
  hpet_broadcast_exit                          335     313     -22
  hpet_broadcast_enter                         327     278     -49
  handle_hpet_broadcast                        572     499     -73

Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
---
CC: Jan Beulich <JBeulich@xxxxxxxx>

v2:
 * Switch back to per_cpu(timer_deadline, cpu);
 * Add a comment explaining why deadline is still valid to use.
---
 xen/arch/x86/hpet.c | 22 +++++++++++++---------
 1 file changed, 13 insertions(+), 9 deletions(-)

diff --git a/xen/arch/x86/hpet.c b/xen/arch/x86/hpet.c
index 46f4c42..29ad365 100644
--- a/xen/arch/x86/hpet.c
+++ b/xen/arch/x86/hpet.c
@@ -189,12 +189,11 @@ static void handle_hpet_broadcast(struct hpet_event_channel *ch)
     {
         s_time_t deadline;
 
-        rmb();
-        deadline = per_cpu(timer_deadline, cpu);
-        rmb();
         if ( !cpumask_test_cpu(cpu, ch->cpumask) )
             continue;
 
+        deadline = ACCESS_ONCE(per_cpu(timer_deadline, cpu));
+
         if ( deadline <= now )
             __cpumask_set_cpu(cpu, &mask);
         else if ( deadline < next_event )
@@ -697,8 +696,9 @@ void hpet_broadcast_enter(void)
 {
     unsigned int cpu = smp_processor_id();
     struct hpet_event_channel *ch = per_cpu(cpu_bc_channel, cpu);
+    s_time_t deadline = per_cpu(timer_deadline, cpu);
 
-    if ( per_cpu(timer_deadline, cpu) == 0 )
+    if ( deadline == 0 )
         return;
 
     if ( !ch )
@@ -714,9 +714,12 @@ void hpet_broadcast_enter(void)
     cpumask_set_cpu(cpu, ch->cpumask);
 
     spin_lock(&ch->lock);
-    /* reprogram if current cpu expire time is nearer */
-    if ( per_cpu(timer_deadline, cpu) < ch->next_event )
-        reprogram_hpet_evt_channel(ch, per_cpu(timer_deadline, cpu), NOW(), 1);
+    /*
+     * Reprogram if the current cpu's expiry time is nearer.  deadline is
+     * never written by a remote cpu, so the value read above stays valid.
+     */
+    if ( deadline < ch->next_event )
+        reprogram_hpet_evt_channel(ch, deadline, NOW(), 1);
     spin_unlock(&ch->lock);
 }
 
@@ -724,8 +727,9 @@ void hpet_broadcast_exit(void)
 {
     unsigned int cpu = smp_processor_id();
     struct hpet_event_channel *ch = per_cpu(cpu_bc_channel, cpu);
+    s_time_t deadline = per_cpu(timer_deadline, cpu);
 
-    if ( per_cpu(timer_deadline, cpu) == 0 )
+    if ( deadline == 0 )
         return;
 
     if ( !ch )
@@ -733,7 +737,7 @@ void hpet_broadcast_exit(void)
 
     /* Reprogram the deadline; trigger timer work now if it has passed. */
     enable_APIC_timer();
-    if ( !reprogram_timer(per_cpu(timer_deadline, cpu)) )
+    if ( !reprogram_timer(deadline) )
         raise_softirq(TIMER_SOFTIRQ);
 
     cpumask_clear_cpu(cpu, ch->cpumask);
-- 
2.1.4

