
Re: [PATCH v3 5/8] xen/arm/gic: Allow routing/removing interrupt to running VMs


  • To: Julien Grall <julien@xxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Henry Wang <xin.wang2@xxxxxxx>
  • Date: Wed, 22 May 2024 09:07:30 +0800
  • Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>, "Volodymyr Babchuk" <Volodymyr_Babchuk@xxxxxxxx>
  • Delivery-date: Wed, 22 May 2024 01:08:03 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

Hi Julien,

On 5/21/2024 8:30 PM, Julien Grall wrote:
Hi,

On 21/05/2024 05:35, Henry Wang wrote:
diff --git a/xen/arch/arm/gic-vgic.c b/xen/arch/arm/gic-vgic.c
index 56490dbc43..956c11ba13 100644
--- a/xen/arch/arm/gic-vgic.c
+++ b/xen/arch/arm/gic-vgic.c
@@ -439,24 +439,33 @@ int vgic_connect_hw_irq(struct domain *d, struct vcpu *v, unsigned int virq,
      /* We are taking to rank lock to prevent parallel connections. */
      vgic_lock_rank(v_target, rank, flags);
+    spin_lock(&v_target->arch.vgic.lock);

I know this is what Stefano suggested, but v_target would point to the current affinity, whereas the interrupt may be pending/active on the "previous" vCPU. So it is a little unclear whether v_target is the correct lock. Do you have more pointers to show this is correct?

No, I think you are correct; we discussed this in the initial version of this patch. Sorry.

I followed the approach from that discussion: note down the vCPU ID and retrieve it here. Below is the diff; would this make sense to you?

diff --git a/xen/arch/arm/gic-vgic.c b/xen/arch/arm/gic-vgic.c
index 956c11ba13..134ed4e107 100644
--- a/xen/arch/arm/gic-vgic.c
+++ b/xen/arch/arm/gic-vgic.c
@@ -439,7 +439,7 @@ int vgic_connect_hw_irq(struct domain *d, struct vcpu *v, unsigned int virq,

     /* We are taking to rank lock to prevent parallel connections. */
     vgic_lock_rank(v_target, rank, flags);
-    spin_lock(&v_target->arch.vgic.lock);
+    spin_lock(&d->vcpu[p->spi_vcpu_id]->arch.vgic.lock);

     if ( connect )
     {
@@ -465,7 +465,7 @@ int vgic_connect_hw_irq(struct domain *d, struct vcpu *v, unsigned int virq,
             p->desc = NULL;
     }

-    spin_unlock(&v_target->arch.vgic.lock);
+    spin_unlock(&d->vcpu[p->spi_vcpu_id]->arch.vgic.lock);
     vgic_unlock_rank(v_target, rank, flags);

     return ret;
diff --git a/xen/arch/arm/include/asm/vgic.h b/xen/arch/arm/include/asm/vgic.h
index 79b73a0dbb..f4075d3e75 100644
--- a/xen/arch/arm/include/asm/vgic.h
+++ b/xen/arch/arm/include/asm/vgic.h
@@ -85,6 +85,7 @@ struct pending_irq
     uint8_t priority;
     uint8_t lpi_priority;       /* Caches the priority if this is an LPI. */
     uint8_t lpi_vcpu_id;        /* The VCPU for an LPI. */
+    uint8_t spi_vcpu_id;        /* The VCPU for an SPI. */
     /* inflight is used to append instances of pending_irq to
      * vgic.inflight_irqs */
     struct list_head inflight;
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index c04fc4f83f..e852479f13 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -632,6 +632,7 @@ void vgic_inject_irq(struct domain *d, struct vcpu *v, unsigned int virq,
     }
     list_add_tail(&n->inflight, &v->arch.vgic.inflight_irqs);
 out:
+    n->spi_vcpu_id = v->vcpu_id;
     spin_unlock_irqrestore(&v->arch.vgic.lock, flags);

     /* we have a new higher priority irq, inject it into the guest */
     vcpu_kick(v);


Also, while looking at the locking, I noticed that we are not doing anything with GIC_IRQ_GUEST_MIGRATING. In gic_update_one_lr(), we seem to assume that if the flag is set, then p->desc cannot be NULL.

Can we reach vgic_connect_hw_irq() with the flag set?

I think that, even just to make the code extra safe, we should also check GIC_IRQ_GUEST_MIGRATING, since an LR is allocated in that case. I will add that check here as well; a rough sketch of what I have in mind is below.
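Something along these lines, extending the existing connect-time check (only a sketch to show the idea, not the final patch, and the exact set of bits to test still needs confirming):

    if ( connect )
    {
        /*
         * Sketch only: also refuse to connect the hardware IRQ while a
         * migration of the virtual IRQ is still in flight, because an LR
         * may still be allocated for it in that case.
         */
        if ( !p->desc &&
             !test_bit(GIC_IRQ_GUEST_ENABLED, &p->status) &&
             !test_bit(GIC_IRQ_GUEST_MIGRATING, &p->status) )
            p->desc = desc;
        else
            ret = -EBUSY;
    }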

What about the other flags? Is this going to be a concern if we don't reset them?

I don't think so. If I am not mistaken, no LR will be allocated with the other flags set. For reference, I have listed the status bits below.
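These are the pending_irq status bits, paraphrased from xen/arch/arm/include/asm/vgic.h (the comments are my summary rather than a verbatim copy of the header, so please double-check there):

    /* pending_irq status bits (paraphrased) */
    #define GIC_IRQ_GUEST_QUEUED       0  /* queued on inflight_irqs/lr_pending */
    #define GIC_IRQ_GUEST_ACTIVE       1  /* active in an LR */
    #define GIC_IRQ_GUEST_VISIBLE      2  /* currently present in an LR */
    #define GIC_IRQ_GUEST_ENABLED      3  /* enabled by the guest */
    #define GIC_IRQ_GUEST_MIGRATING    4  /* being migrated to another vCPU */
    #define GIC_IRQ_GUEST_PRISTINE_LPI 5  /* newly mapped LPI, never in an LR yet */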

Kind regards,
Henry

