
Re: [PATCH v1 08/15] xen/riscv: introduce vtimer_set_timer() and vtimer_expired()




On 1/8/26 11:28 AM, Jan Beulich wrote:
On 24.12.2025 18:03, Oleksii Kurochko wrote:
--- a/xen/arch/riscv/include/asm/vtimer.h
+++ b/xen/arch/riscv/include/asm/vtimer.h
@@ -22,4 +22,6 @@ void vcpu_timer_destroy(struct vcpu *v);
 int domain_vtimer_init(struct domain *d, struct xen_arch_domainconfig *config);
+void vtimer_set_timer(struct vtimer *t, const uint64_t ticks);
+
  #endif /* ASM__RISCV__VTIMER_H */
diff --git a/xen/arch/riscv/vtimer.c b/xen/arch/riscv/vtimer.c
index 5ba533690bc2..99a0c5986f1d 100644
--- a/xen/arch/riscv/vtimer.c
+++ b/xen/arch/riscv/vtimer.c
@@ -1,6 +1,8 @@
  /* SPDX-License-Identifier: GPL-2.0-only */
+#include <xen/domain.h>
Is this really needed, when ...

  #include <xen/sched.h>
... this is already there?

Given the way the includes in xen/sched.h look - no, it isn't needed.


+#include <xen/time.h>
Don't you mean xen/timer.h here?

You are right, it should be xen/timer.h, as set_timer(), stop_timer() and
migrate_timer() are declared in xen/timer.h.
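
The top of vtimer.c would then read (just a sketch of the corrected include
block):

/* SPDX-License-Identifier: GPL-2.0-only */

#include <xen/sched.h> /* already pulls in xen/domain.h */
#include <xen/timer.h> /* set_timer(), stop_timer(), migrate_timer(), kill_timer() */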


@@ -15,7 +17,9 @@ int domain_vtimer_init(struct domain *d, struct xen_arch_domainconfig *config)
static void vtimer_expired(void *data)
  {
-    panic("%s: TBD\n", __func__);
+    struct vtimer *t = data;
Pointer-to-const please.
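
I.e. something like (a sketch of just the declaration change; the rest of the
handler body is elided in the quote above):

static void vtimer_expired(void *data)
{
    const struct vtimer *t = data;
    ...
}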

@@ -37,3 +41,27 @@ void vcpu_timer_destroy(struct vcpu *v)
kill_timer(&v->arch.vtimer.timer);
  }
+
+void vtimer_set_timer(struct vtimer *t, const uint64_t ticks)
+{
+    s_time_t expires = ticks_to_ns(ticks - boot_clock_cycles);
boot_clock_cycles is known to just Xen. If the guest provided input is an
absolute value, how would that work across migration? Doesn't there need
to be a guest-specific bias instead?

I'm not sure I fully understand your question, but it sounds like this is a
job for the htimedelta register.
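
Something along these lines, I suppose (a sketch only; vtimer_offset is a
made-up per-domain field standing in for whatever ends up backing htimedelta):

    /*
     * Guest time = host time + htimedelta, so a guest-provided absolute
     * tick value needs the per-domain bias stripped before it can be
     * turned into a host expiry.
     */
    uint64_t host_ticks = ticks - t->v->domain->arch.vtimer_offset; /* hypothetical field */
    s_time_t expires = ticks_to_ns(host_ticks - boot_clock_cycles);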


+    vcpu_unset_interrupt(t->v, IRQ_VS_TIMER);
+
+    /*
+     * According to the RISC-V sbi spec:
+     *   If the supervisor wishes to clear the timer interrupt without
+     *   scheduling the next timer event, it can either request a timer
+     *   interrupt infinitely far into the future (i.e., (uint64_t)-1),
+     *   or it can instead mask the timer interrupt by clearing sie.STIE CSR
+     *   bit.
+     */
And SBI is the only way to set the expiry value? No CSR access? (Question
also concerns the unconditional vcpu_unset_interrupt() above.)

If we don't have Sstc extension support then I suppose yes, as CSR_MI{E,P}
can only be accessed from M-mode:
 (code from OpenSBI)
void sbi_timer_event_start(u64 next_event)
{
        sbi_pmu_ctr_incr_fw(SBI_PMU_FW_SET_TIMER);

        /**
         * Update the stimecmp directly if available. This allows
         * the older software to leverage sstc extension on newer hardware.
         */
        if (sbi_hart_has_extension(sbi_scratch_thishart_ptr(),
                                   SBI_HART_EXT_SSTC)) {
#if __riscv_xlen == 32
                csr_write(CSR_STIMECMP, next_event & 0xFFFFFFFF);
                csr_write(CSR_STIMECMPH, next_event >> 32);
#else
                csr_write(CSR_STIMECMP, next_event);
#endif
        } else if (timer_dev && timer_dev->timer_event_start) {
                timer_dev->timer_event_start(next_event);
                csr_clear(CSR_MIP, MIP_STIP);
        }
        csr_set(CSR_MIE, MIP_MTIP);
}
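
Though once Sstc gets exposed to the guest, the VS-mode kernel can program the
comparand itself and nothing traps to Xen at all (guest-side sketch, csr_write()
as in the OpenSBI style above; assumes henvcfg.STCE is set):

        /* Guest (VS-mode) kernel with Sstc: no SBI call, no trap to the hypervisor. */
        csr_write(CSR_STIMECMP, next_event); /* hardware redirects this to vstimecmp */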


+    if ( ticks == ((uint64_t)~0ULL) )
Nit: With the cast you won't need the ULL suffix.
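
Presumably, i.e.:

    if ( ticks == (uint64_t)~0 )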

+    {
+        stop_timer(&t->timer);
+
+        return;
+    }
+
+    set_timer(&t->timer, expires);
See the handling of VCPUOP_set_singleshot_timer for what you may want to
do if the expiry asked for is (perhaps just very slightly) into the past.

I see why we want to check whether "expires" has already passed, but ...

There you'll also find a use of migrate_timer(), which you will want to
at least consider using here as well.

... I don't get why we want to migrate the timer before set_timer() here.
Could you please explain that?
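
For what it's worth, transplanting that pattern here would look roughly like
this, as far as I can tell (a sketch only, using this patch's names; not
verbatim common/domain.c code):

    if ( expires <= NOW() )
    {
        /* Deadline already passed: treat the timer as having fired. */
        vtimer_expired(t);
        return;
    }

    /* Re-home the timer onto the pCPU this vCPU currently runs on. */
    migrate_timer(&t->timer, smp_processor_id());
    set_timer(&t->timer, expires);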

Thanks.

~ Oleksii
