
Re: [Xen-devel] [PATCH v9 18/28] ARM: vITS: handle CLEAR command





On 05/23/2017 06:24 PM, Andre Przywara wrote:
Hi,

Hi Andre,


On 17/05/17 18:45, Julien Grall wrote:
Hi Andre,

On 11/05/17 18:53, Andre Przywara wrote:
This introduces the ITS command handler for the CLEAR command, which
clears the pending state of an LPI.
This removes a not-yet injected, but already queued IRQ from a VCPU.
As read_itte() now finally gets used (within this file), we add the static keyword.

Signed-off-by: Andre Przywara <andre.przywara@xxxxxxx>
---
 xen/arch/arm/vgic-v3-its.c | 59 ++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 57 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
index 8f1c217..8a200e9 100644
--- a/xen/arch/arm/vgic-v3-its.c
+++ b/xen/arch/arm/vgic-v3-its.c
@@ -52,6 +52,7 @@
  */
 struct virt_its {
     struct domain *d;
+    paddr_t doorbell_address;
     unsigned int devid_bits;
     unsigned int evid_bits;
     spinlock_t vcmd_lock;       /* Protects the virtual command
buffer, which */
@@ -251,8 +252,8 @@ static bool read_itte_locked(struct virt_its *its, uint32_t devid,
  * This function takes care of the locking by taking the its_lock itself, so
  * a caller shall not hold this. Before returning, the lock is dropped again.
  */
-bool read_itte(struct virt_its *its, uint32_t devid, uint32_t evid,
-               struct vcpu **vcpu_ptr, uint32_t *vlpi_ptr)
+static bool read_itte(struct virt_its *its, uint32_t devid, uint32_t evid,
+                      struct vcpu **vcpu_ptr, uint32_t *vlpi_ptr)
 {
     bool ret;

@@ -362,6 +363,57 @@ static int its_handle_mapc(struct virt_its *its, uint64_t *cmdptr)
     return 0;
 }

+/*
+ * CLEAR removes the pending state from an LPI. */
+static int its_handle_clear(struct virt_its *its, uint64_t *cmdptr)
+{
+    uint32_t devid = its_cmd_get_deviceid(cmdptr);
+    uint32_t eventid = its_cmd_get_id(cmdptr);
+    struct pending_irq *p;
+    struct vcpu *vcpu;
+    uint32_t vlpi;
+    unsigned long flags;
+    int ret = -1;
+
+    spin_lock(&its->its_lock);
+
+    /* Translate the DevID/EvID pair into a vCPU/vLPI pair. */
+    if ( !read_itte_locked(its, devid, eventid, &vcpu, &vlpi) )
+        goto out_unlock;
+
+    p = gicv3_its_get_event_pending_irq(its->d, its->doorbell_address,
+                                        devid, eventid);
+    /* Protect against an invalid LPI number. */
+    if ( unlikely(!p) )
+        goto out_unlock;
+
+    spin_lock_irqsave(&vcpu->arch.vgic.lock, flags);

My comment in patch #9 about crafting the memory handed over to the ITS
applies here too.

+
+    /*
+     * If the LPI is already visible on the guest, it is too late to
+     * clear the pending state. However this is a benign race that can
+     * happen on real hardware, too: If the LPI has already been forwarded
+     * to a CPU interface, a CLEAR request reaching the redistributor has
+     * no effect on that LPI anymore. Since LPIs are edge triggered and
+     * have no active state, we don't need to care about this here.
+     */
+    if ( !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
+    {
+        /* Remove a pending, but not yet injected guest IRQ. */
+        clear_bit(GIC_IRQ_GUEST_QUEUED, &p->status);
+        list_del_init(&p->inflight);
+        list_del_init(&p->lr_queue);

On the previous version I was against this open-coding of
gic_remove_from_queues and instead asked you to rework the function.

Well, I consider gic_remove_from_queues() somewhat broken:
- It should be called vgic_remove... and live in vgic.c, because it
deals with the virtual side only.

Then append a patch for that and ...

- The plural in the name is wrong, since it only removes the IRQ from
lr_pending, not inflight.

... that.

- vgic_migrate_irq removes an IRQ from both queues as well, and doesn't
use the function (for the same reason).

The existing code may not use it, but that is no reason to continue open-coding it.

So to make it usable in our case, I'd need to either open-code the
inflight removal here (which would make calling this function a bit
pointless) or add that to the function, but then remove the existing caller.
That looks like a can of worms to me and a distraction from the actual goal
of getting the ITS in place.
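
For reference, as far as I understand it the current helper boils down to
roughly the following (a paraphrase of what I described above, not a
verbatim quote of gic.c, so details may differ):

void gic_remove_from_queues(struct vcpu *v, unsigned int virtual_irq)
{
    struct pending_irq *p = irq_to_pending(v, virtual_irq);
    unsigned long flags;

    /* Takes the per-VCPU vgic lock itself ... */
    spin_lock_irqsave(&v->arch.vgic.lock, flags);
    /* ... and only drops the IRQ from lr_pending, not from inflight. */
    if ( !list_empty(&p->lr_queue) )
        list_del_init(&p->lr_queue);
    spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
}

So it neither touches inflight nor can it be called with the vgic lock
already held, which is what the CLEAR handler above needs.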

Not necessarily... you could extend the prototype to specify how much you want to clean up.
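
Something along these lines, as a rough sketch (the name and the extra
parameter are made up here, pick whatever you prefer):

void vgic_remove_irq_from_queues(struct vcpu *v, struct pending_irq *p,
                                 bool remove_inflight)
{
    /* Caller is expected to hold the vgic lock already. */
    ASSERT(spin_is_locked(&v->arch.vgic.lock));

    clear_bit(GIC_IRQ_GUEST_QUEUED, &p->status);
    list_del_init(&p->lr_queue);
    if ( remove_inflight )
        list_del_init(&p->inflight);
}

The CLEAR handler could then call it with the vgic lock held instead of
open-coding the list manipulation, and vgic_migrate_irq() could be
converted later.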

So I will surely address this with the VGIC rework (possibly removing
this function altogether), but would like to avoid doing this rework
*now*. To catch all users of the list I would need to grep for inflight
and lr_pending anyway, so one more "open-coded" place is not a big deal.

Please stop saying this is "not a big deal". It does not help get this series into a shape that makes the maintainers happy enough to merge it.

In this case, I didn't ask you to remove all the open-coded places, only not to add more.


It still does not make sense to me: if one day someone decides to update
gic_remove_from_queues (such as you, since you are going to rework the
vGIC), they will have to remember that you open-coded it in MOVE because
you didn't want to touch the common code.

As I mentioned above, this is already the situation for vgic_migrate_irq().

To me this is a call to fix it, rather than to make the situation even worse...


Common code is not set in stone. The goal is to abstract away such details
to make it easier to propagate changes. So please address this comment.

I clearly understand this and am all for fixing it, but I don't
believe the ITS series is the place to do so. In fact I don't want to
add more code to this series.

Can you stop deferring everything until after the ITS merge? I understand why some tasks can be deferred, because the support is not strictly needed yet or would be too difficult. But in this case you haven't convinced me that this cannot be done here.

If gic_remove_from_queues() lived up to the promise its name makes, I
would love to use it, but it doesn't, so ...

Then fix it and rename it.

Cheers,

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

