
[Xen-devel] [RFC 06/10] xen/arm: gic: Allow the LRs to be cleared lazily



Currently, the LRs are cleared every time we enter Xen from a guest,
before the hypervisor takes any action. This requires reloading the
guest registers from the stack when, for instance, a hypercall is
handled.

However, we only need to clear the LRs for the following actions:
    - An interrupt is received and injected
    - Checking hypercall preemption
    - Before re-entering in the guest

A new per-vCPU flag has been introduced in vgic.flags to track whether
the LRs have been cleared since the vCPU started running. The flag is
always cleared before switching back to the guest vCPU.
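
To make the flag's lifecycle concrete, here is a minimal standalone
sketch of the pattern, using C11 atomics in place of Xen's
test_and_set_bit()/clear_bit() helpers; the model_* names are
hypothetical and only illustrate the lifecycle, not the actual Xen code:

    #include <stdatomic.h>
    #include <stdio.h>

    /* Mirrors _VGIC_LRS_CLEARED = 1 in the patch below. */
    #define MODEL_LRS_CLEARED (1u << 1)

    struct model_vcpu {
        atomic_uint flags;
    };

    /* Called on every hypervisor entry; only does real work once
     * per guest run. */
    static void model_clear_lrs(struct model_vcpu *v)
    {
        /* The atomic fetch-or returns the old value: if the bit was
         * already set, the LRs were cleared earlier and we return
         * early, like test_and_set_bit() in gic_clear_lrs(). */
        if ( atomic_fetch_or(&v->flags, MODEL_LRS_CLEARED) &
             MODEL_LRS_CLEARED )
            return;

        printf("clearing the LRs\n"); /* stands in for the real work */
    }

    /* Called just before switching back to the guest vCPU, like
     * clear_bit() in gic_inject(). */
    static void model_enter_guest(struct model_vcpu *v)
    {
        /* Re-arm the flag so the LRs are cleared on the next trap. */
        atomic_fetch_and(&v->flags, ~MODEL_LRS_CLEARED);
    }

    int main(void)
    {
        struct model_vcpu v = { .flags = 0 };

        model_clear_lrs(&v);   /* first trap: clears the LRs */
        model_clear_lrs(&v);   /* second call: returns early */
        model_enter_guest(&v); /* flag re-armed before guest entry */
        model_clear_lrs(&v);   /* next trap: clears again */
        return 0;
    }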

As clearing the LRs is now lazy, we have to check whether they have
been cleared every time we enter the hypervisor, not only when we enter
from a lower exception level.

Note that nothing takes advantage of clearing the LRs lazily yet. A
follow-up patch will enable this possibility.

Signed-off-by: Julien Grall <julien.grall@xxxxxxxxxx>
---
 xen/arch/arm/gic.c           | 11 +++++++++++
 xen/arch/arm/traps.c         |  9 ++++++++-
 xen/include/asm-arm/domain.h |  7 ++++++-
 3 files changed, 25 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 37e579b..5d70251 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -476,6 +476,13 @@ void gic_clear_lrs(struct vcpu *v)
     if ( is_idle_vcpu(v) )
         return;
 
 +    /*
 +     * Check whether the LRs have already been cleared for this vCPU
 +     * since the last time it started running.
 +     */
+    if ( test_and_set_bit(_VGIC_LRS_CLEARED, &v->arch.vgic.flags) )
+        return;
+
     gic_hw_ops->update_hcr_status(GICH_HCR_UIE, 0);
 
     spin_lock_irqsave(&v->arch.vgic.lock, flags);
@@ -566,6 +573,8 @@ int gic_events_need_delivery(void)
     int active_priority;
     int rc = 0;
 
+    gic_clear_lrs(v);
+
     mask_priority = gic_hw_ops->read_vmcr_priority();
     active_priority = find_next_bit(&apr, 32, 0);
 
@@ -602,6 +611,8 @@ void gic_inject(void)
 
     gic_restore_pending_irqs(v);
 
+    clear_bit(_VGIC_LRS_CLEARED, &v->arch.vgic.flags);
+
     if ( !list_empty(&v->arch.vgic.lr_pending) && lr_all_full() )
         gic_hw_ops->update_hcr_status(GICH_HCR_UIE, 1);
 }
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index c49bd3f..f222d96 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -2451,7 +2451,12 @@ bad_data_abort:
 
 static void enter_hypervisor_head(struct cpu_user_regs *regs)
 {
-    if ( guest_mode(regs) )
 +    /*
 +     * enter_hypervisor_head is called by most of the traps taken by the
 +     * processor. They can come from either the hypervisor or the guest.
 +     * However, current is only valid once Xen has finished booting.
 +     */
 +    if ( likely(system_state >= SYS_STATE_active) )
         gic_clear_lrs(current);
 }
 
@@ -2602,6 +2607,8 @@ asmlinkage void do_trap_fiq(struct cpu_user_regs *regs)
 
 asmlinkage void leave_hypervisor_tail(void)
 {
+    gic_clear_lrs(current);
+
     while (1)
     {
         local_irq_disable();
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index e7e40da..401b4d0 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -245,7 +245,12 @@ struct arch_vcpu
 
         /* GICv3: redistributor base and flags for this vCPU */
         paddr_t rdist_base;
-#define VGIC_V3_RDIST_LAST  (1 << 0)        /* last vCPU of the rdist */
+/* Last vCPU of the rdist */
+#define _VGIC_V3_RDIST_LAST (0)
 +#define VGIC_V3_RDIST_LAST  (1 << _VGIC_V3_RDIST_LAST)
+/* LRs have been cleared for this vCPU */
+#define _VGIC_LRS_CLEARED   (1)
+#define VGIC_LRS_CLEARED    (1 << _VGIC_LRS_CLEARED)
         uint8_t flags;
     } vgic;
 
-- 
2.1.4

