
Re: [Xen-devel] [RFC PATCH 04/10] ARM: vGIC: add struct pending_irq locking



Hi Andre,

On 05/05/2017 10:02 AM, Andre Przywara wrote:
Hi,

On 04/05/17 17:21, Julien Grall wrote:
Hi Andre,

On 04/05/17 16:31, Andre Przywara wrote:
Introduce the proper locking sequence for the new pending_irq lock.
This takes the lock around multiple accesses to struct members and
makes sure we observe the locking order (VGIC VCPU lock first, then
pending_irq lock).

This locking order should be explained in the code. Likely in vgic.h.
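Something along these lines would do, I think (wording and placement
illustrative only):

/*
 * Locking order: when both locks are needed, always take the per-VCPU
 * VGIC lock (v->arch.vgic.lock) first, then the per-IRQ lock
 * (pending_irq->lock). When two pending_irq locks must be held at the
 * same time, take the one with the lower virq first.
 */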


Signed-off-by: Andre Przywara <andre.przywara@xxxxxxx>
---
 xen/arch/arm/gic.c  | 26 ++++++++++++++++++++++++++
 xen/arch/arm/vgic.c | 12 +++++++++++-
 2 files changed, 37 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 67375a2..e175e9b 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -351,6 +351,7 @@ void gic_disable_cpu(void)
 static inline void gic_set_lr(int lr, struct pending_irq *p,
                               unsigned int state)
 {
+    ASSERT(spin_is_locked(&p->lock));
     ASSERT(!local_irq_is_enabled());

     gic_hw_ops->update_lr(lr, p, state);
@@ -413,6 +414,7 @@ void gic_raise_guest_irq(struct vcpu *v, struct pending_irq *p)
     unsigned int nr_lrs = gic_hw_ops->info->nr_lrs;

     ASSERT(spin_is_locked(&v->arch.vgic.lock));
+    ASSERT(spin_is_locked(&p->lock));

     if ( v == current && list_empty(&v->arch.vgic.lr_pending) )
     {
@@ -439,6 +441,7 @@ static void gic_update_one_lr(struct vcpu *v, int i)
     gic_hw_ops->read_lr(i, &lr_val);
     irq = lr_val.virq;
     p = irq_to_pending(v, irq);
+    spin_lock(&p->lock);
     if ( lr_val.state & GICH_LR_ACTIVE )
     {
         set_bit(GIC_IRQ_GUEST_ACTIVE, &p->status);
@@ -495,6 +498,7 @@ static void gic_update_one_lr(struct vcpu *v, int i)
             }
         }
     }
+    spin_unlock(&p->lock);
 }

 void gic_clear_lrs(struct vcpu *v)
@@ -545,14 +549,30 @@ static void gic_restore_pending_irqs(struct vcpu *v)
             /* No more free LRs: find a lower priority irq to evict */
             list_for_each_entry_reverse( p_r, inflight_r, inflight )
             {
+                if ( p_r->irq < p->irq )
+                {
+                    spin_lock(&p_r->lock);
+                    spin_lock(&p->lock);
+                }
+                else
+                {
+                    spin_lock(&p->lock);
+                    spin_lock(&p_r->lock);
+                }

Please explain in the commit message and in the code why this locking order was chosen.
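For context: ordering two locks of the same class by a stable key, here
the virq, is the standard way to avoid an ABBA deadlock between two
concurrent callers. A minimal sketch, with a hypothetical helper name:

static void pending_irq_lock_pair(struct pending_irq *a,
                                  struct pending_irq *b)
{
    /* Always take the lock of the lower virq first, so that two CPUs
     * locking the same pair can never end up waiting on each other. */
    if ( a->irq < b->irq )
    {
        spin_lock(&a->lock);
        spin_lock(&b->lock);
    }
    else
    {
        spin_lock(&b->lock);
        spin_lock(&a->lock);
    }
}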

                 if ( p_r->priority == p->priority )
+                {
+                    spin_unlock(&p->lock);
+                    spin_unlock(&p_r->lock);
                     goto out;
+                }
                 if ( test_bit(GIC_IRQ_GUEST_VISIBLE, &p_r->status) &&
                      !test_bit(GIC_IRQ_GUEST_ACTIVE, &p_r->status) )
                     goto found;
             }
             /* We didn't find a victim this time, and we won't next
              * time, so quit */
+            spin_unlock(&p->lock);
+            spin_unlock(&p_r->lock);
             goto out;

 found:
@@ -562,12 +582,18 @@ found:
             clear_bit(GIC_IRQ_GUEST_VISIBLE, &p_r->status);
             gic_add_to_lr_pending(v, p_r);
             inflight_r = &p_r->inflight;
+
+            spin_unlock(&p_r->lock);
         }
+        else
+            spin_lock(&p->lock);

         gic_set_lr(lr, p, GICH_LR_PENDING);
         list_del_init(&p->lr_queue);
         set_bit(lr, &this_cpu(lr_mask));

+        spin_unlock(&p->lock);
+
         /* We can only evict nr_lrs entries */
         lrs--;
         if ( lrs == 0 )
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index f4ae454..44363bb 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -356,11 +356,16 @@ void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
     while ( (i = find_next_bit(&mask, 32, i)) < 32 ) {
         irq = i + (32 * n);
         v_target = vgic_get_target_vcpu(v, irq);
+
+        spin_lock_irqsave(&v_target->arch.vgic.lock, flags);
         p = irq_to_pending(v_target, irq);
+        spin_lock(&p->lock);
+
         set_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
-        spin_lock_irqsave(&v_target->arch.vgic.lock, flags);

IMHO, this should belong in a separate patch, as it does not strictly
relate to this one.

I don't think it makes much sense to split this, as this change is
motivated by the introduction of the pending_irq lock, and we have to
move the VGIC VCPU lock due to the locking order.

My point here is that this change does not need to be in this specific patch. Moving it to a separate patch would allow you to justify the change and would simplify this patch.



+

Spurious change.

Well, that helps to structure the code.

I call that code clean-up, and it should be moved to a separate patch. A general rule is to keep patches as small as possible; that makes the changes easier to review and justify.


         if ( !list_empty(&p->inflight) && !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
             gic_raise_guest_irq(v_target, p);
+        spin_unlock(&p->lock);
         spin_unlock_irqrestore(&v_target->arch.vgic.lock, flags);
         if ( p->desc != NULL )
         {
@@ -482,10 +487,12 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int virq)
         return;
     }

+    spin_lock(&n->lock);
     set_bit(GIC_IRQ_GUEST_QUEUED, &n->status);

     if ( !list_empty(&n->inflight) )
     {
+        spin_unlock(&n->lock);
         gic_raise_inflight_irq(v, n);

Any reason not to call gic_raise_inflight_irq() with the lock held? This
would also simplify the function quite a lot and avoid unlocking on
pretty much every exit path.

gic_raise_inflight_irq() calls gic_update_one_lr(), which works out the
pending_irq from the LR number and then takes the lock.
Yes, there are other ways of solving this:
- remove gic_raise_inflight_irq() altogether
- rework gic_update_one_lr() to take an already locked pending_irq

Both approaches have other issues that pop up; I think the solution
here is the least hideous and least intrusive.
Frankly I believe that removing gic_raise_inflight_irq() altogether is
the best solution, but this requires more rework (which Stefano hinted
at not liking too much) and I wanted to keep this series as simple as
possible.
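
For illustration only, the second option could look roughly like this
(name and signature hypothetical, not what this series implements):

static void gic_update_one_lr_locked(struct vcpu *v, int i,
                                     struct pending_irq *p)
{
    ASSERT(spin_is_locked(&v->arch.vgic.lock));
    ASSERT(spin_is_locked(&p->lock));

    /* ... body of today's gic_update_one_lr(), minus the
     * irq_to_pending() lookup and the lock/unlock of p ... */
}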

The problem I see with this patch series is that it does not go far enough. Without the rest of the vgic rework, it is hard to say whether the locking order or the placement of vgic_{get,put}_* is valid.

For instance, as we discussed yesterday face to face, in this series you are suggesting the locking order:
     1) vgic vcpu lock
     2) pending_irq lock

Because of this locking order, in some places you may need to take the pending_irq lock temporarily, drop it, and then take the locks again in the correct order.
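
Roughly this kind of dance (illustrative sketch, details elided):

    p = irq_to_pending(v, virq);
    spin_lock(&p->lock);            /* peek at p to find the target */
    v_target = ...;
    spin_unlock(&p->lock);          /* wrong order, so drop it ... */

    spin_lock_irqsave(&v_target->arch.vgic.lock, flags);
    spin_lock(&p->lock);            /* ... and re-take both in order */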

I do believe that we can limit the use of the vgic vcpu lock (it mostly protects the list in pending_irq), and we could use the locking order:
     1) pending_irq lock
     2) vgic vcpu lock

This would also simplify the locking afterwards. But the correct ordering can only be decided with a full vGIC rework in mind.
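
To illustrate the simplification, the same path under the inverted
order (again just a sketch, details elided):

    p = irq_to_pending(v, virq);
    spin_lock(&p->lock);            /* p is stable while we look */
    v_target = ...;
    spin_lock_irqsave(&v_target->arch.vgic.lock, flags);

No drop-and-retake is needed, because the pending_irq lock is already
the outermost lock.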

What I want to avoid is merging a lock ordering today, only to get a new series with a different locking order in a couple of weeks.

Cheers,

--
Julien Grall
