Re: [PATCH] xen/arm: avoid extra caclulations when setting vtimer in context switch
Hi Jiamei,

Title: s/caclulations/calculations/

However, I think the title should mention the overflow rather than the
extra calculations. The former is more important than the latter.

On 27/06/2022 03:58, Jiamei Xie wrote:
> virt_vtimer_save is calculating the new time for the vtimer in:
> "v->arch.virt_timer.cval + v->domain->arch.virt_timer_base.offset -
> boot_count". In this formula, "cval + offset" might cause uint64_t
> overflow. Changing it to "v->domain->arch.virt_timer_base.offset -
> boot_count + v->arch.virt_timer.cval" can reduce the possibility of
> overflow

This reads strangely to me. We want to remove the overflow completely,
not just reduce it. The overflow is completely removed by converting
"offset - boot_count" to ns upfront. AFAICT, the commit message doesn't
explain that.

> , and "arch.virt_timer_base.offset - boot_count" will be always the
> same, which has been caculated in domain_vtimer_init. Introduce a new
> field vtimer_offset.nanoseconds to store this value for arm in struct
> arch_domain, so we can use it directly and extra caclulations can be
> avoided.
>
> This patch is enlightened from [1].
>
> Signed-off-by: Jiamei Xie <jiamei.xie@xxxxxxx>
>
> [1] https://www.mail-archive.com/xen-devel@xxxxxxxxxxxxxxxxxxxx/msg123139.htm

This link doesn't work. But I would personally remove it from the commit
message (or move it below the --- line) because it doesn't bring value
(this patch looks like a v2 to me).
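To make the "convert upfront" point above concrete, the idea in
virt_timer_save() would be something along these lines (a rough sketch,
completely untested; "vtimer_offset.nanoseconds" is the field name
proposed by the patch):

    /* Today: "cval + offset" adds two 64-bit tick values and may wrap
     * around uint64_t before ticks_to_ns() is applied: */
    set_timer(&v->arch.virt_timer.timer,
              ticks_to_ns(v->arch.virt_timer.cval +
                          v->domain->arch.virt_timer_base.offset -
                          boot_count));

    /* With "offset - boot_count" converted to ns once at domain
     * creation, there is no longer a sum of two large tick values,
     * so the wrap-around disappears entirely: */
    set_timer(&v->arch.virt_timer.timer,
              v->domain->arch.vtimer_offset.nanoseconds +
              ticks_to_ns(v->arch.virt_timer.cval));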
> +        uint64_t nanoseconds;

This should be s_time_t to match the argument of set_timer() and the
return of ticks_to_ns().

> +    } vtimer_offset;

Why are you adding a new structure rather than re-using virt_timer_base?
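In other words, something like this (an untested sketch, re-using
virt_timer_base as suggested rather than adding a new structure; the
comments are only illustrative):

    /* In struct arch_domain: */
    struct
    {
        /* Base of the domain's virtual timer, in hardware ticks. */
        uint64_t offset;
        /* ticks_to_ns(offset - boot_count), computed once at domain
         * creation; s_time_t matches set_timer() and ticks_to_ns(). */
        s_time_t nanoseconds;
    } virt_timer_base;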
Hmmm... I find it odd to assign a field "nanoseconds" to "seconds". I
would suggest re-ordering so you first set nanoseconds and then set
seconds. This will make it more obvious that this is not a mistake, and
"seconds" will be closer to the do_div() below.
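I.e. in domain_vtimer_init() something like (sketch only, not even
compile-tested; assuming the existing "d->time_offset.seconds" flow
stays as-is and the field lives in virt_timer_base as suggested above):

    d->arch.virt_timer_base.offset = get_cycles();
    /* Set nanoseconds first... */
    d->arch.virt_timer_base.nanoseconds =
        ticks_to_ns(d->arch.virt_timer_base.offset - boot_count);
    /* ... then derive seconds from it, right next to the do_div(). */
    d->time_offset.seconds = d->arch.virt_timer_base.nanoseconds;
    do_div(d->time_offset.seconds, 1000000000);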
Cheers,

--
Julien Grall