Re: [Xen-devel] [PATCH 1/3] xen: timers: don't miss a timer event because of stop_timer()
>>> On 15.09.17 at 20:01, <dario.faggioli@xxxxxxxxxx> wrote:
> --- a/xen/common/timer.c
> +++ b/xen/common/timer.c
> @@ -217,7 +217,7 @@ static inline void activate_timer(struct timer *timer)
>      timer->status = TIMER_STATUS_invalid;
>      list_del(&timer->inactive);
>
> -    if ( add_entry(timer) )
> +    if ( add_entry(timer) || timer->expires <= NOW() )
>          cpu_raise_softirq(timer->cpu, TIMER_SOFTIRQ);
>  }
I'm not convinced of this change - it's unrelated to what the title and
description say, and I'm not sure a timer being activated in a way
possibly clashing with its expiry is actually intended to fire.
> @@ -326,7 +326,17 @@ void stop_timer(struct timer *timer)
>          return;
>
>      if ( active_timer(timer) )
> -        deactivate_timer(timer);
> +    {
> +        /*
> +         * If the timer is expired already, 'call' the softirq handler to
> +         * execute it (it will leave it inactive after that). If not, just
> +         * deactivate it.
> +         */
> +        if ( timer->expires <= NOW() )
> +            cpu_raise_softirq(timer->cpu, TIMER_SOFTIRQ);
> +        else
> +            deactivate_timer(timer);
> +    }
Isn't it reasonable to expect current behavior, i.e. the timer not
raising any events other than those already in progress?
Additionally I'm not convinced the change actually does what you
want: What if NOW() becomes equal to timer->expires immediately
after you've managed to obtain its value, before execution makes
it into deactivate_timer()? IOW you're shrinking a time window which
(I think) you really mean to eliminate.
Furthermore, taking both changes together: What if the same issue
you try to address in stop_timer() happens in set_timer()? Wouldn't
it then be only consistent to also require a timer event to fire for the
old expiry value, before the new one is being set?
Jan
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel