
Re: [Xen-devel] [PATCH] x86/hpet: Improve handling of timer_deadline



>>> On 31.05.17 at 16:14, <andrew.cooper3@xxxxxxxxxx> wrote:
> Update hpet_broadcast_{enter,exit}() to use this_cpu() rather than per_cpu()
> for clarity,

I'm afraid this makes things worse in other respects (see below).

> @@ -697,8 +696,9 @@ void hpet_broadcast_enter(void)
>  {
>      unsigned int cpu = smp_processor_id();
>      struct hpet_event_channel *ch = per_cpu(cpu_bc_channel, cpu);
> +    s_time_t deadline = this_cpu(timer_deadline);

There's now a this_cpu() sitting side by side with a per_cpu(, cpu). Please
let's not mix the two; and since per_cpu() produces better code when there
are multiple accesses in a single function, I'd prefer if we stayed with
that.
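
FWIW, a minimal sketch of what I have in mind (not compile-tested, and
merely re-arranging the code quoted above; the point is simply to stay
with per_cpu() for both accesses):

    void hpet_broadcast_enter(void)
    {
        unsigned int cpu = smp_processor_id();
        struct hpet_event_channel *ch = per_cpu(cpu_bc_channel, cpu);
        /* Latch the deadline once, but via per_cpu() like the line above. */
        s_time_t deadline = per_cpu(timer_deadline, cpu);

        if ( deadline == 0 )
            return;
        ...
    }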

> -    if ( per_cpu(timer_deadline, cpu) == 0 )
> +    if ( deadline == 0 )
>          return;
>  
>      if ( !ch )
> @@ -715,8 +715,8 @@ void hpet_broadcast_enter(void)
>  
>      spin_lock(&ch->lock);
>      /* reprogram if current cpu expire time is nearer */
> -    if ( per_cpu(timer_deadline, cpu) < ch->next_event )
> -        reprogram_hpet_evt_channel(ch, per_cpu(timer_deadline, cpu), NOW(), 1);
> +    if ( deadline < ch->next_event )
> +        reprogram_hpet_evt_channel(ch, deadline, NOW(), 1);
>      spin_unlock(&ch->lock);
>  }

So you re-use a previously read value after potentially having waited
quite a while for the lock to become available. That this is safe (the
variable only ever gets updated on the local CPU) is not immediately
visible here, so I think the comment wants expanding.
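
Perhaps something along these lines (only a sketch; wording to taste):

    spin_lock(&ch->lock);
    /*
     * Reprogram if the current CPU's expiry time is nearer.  Note that
     * 'deadline' was latched before (possibly) spinning on the lock
     * above; that's fine, since timer_deadline is only ever updated by
     * its owning CPU and hence can't have changed in the meantime.
     */
    if ( deadline < ch->next_event )
        reprogram_hpet_evt_channel(ch, deadline, NOW(), 1);
    spin_unlock(&ch->lock);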

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
