
[Xen-devel] [PATCH v6 for-xen-4.5 3/3] dpci: Replace tasklet with a softirq (v6)



The existing tasklet mechanism has a single global
spinlock that is taken every time the global list
is touched - and we use this lock quite a lot: when
do_tasklet_work is called (via a softirq and from the
idle loop), and on any operation on the tasklet_list.
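
To see why this hurts, here is a condensed sketch of the old
path (not verbatim - the real code is in xen/common/tasklet.c -
but the locking shape is the same):

    static DEFINE_SPINLOCK(tasklet_lock);   /* one lock, all CPUs */
    static DEFINE_PER_CPU(struct list_head, tasklet_list);

    void tasklet_schedule(struct tasklet *t)
    {
        unsigned long flags;

        spin_lock_irqsave(&tasklet_lock, flags);
        list_add_tail(&t->list, &this_cpu(tasklet_list));
        spin_unlock_irqrestore(&tasklet_lock, flags);
        raise_softirq(TASKLET_SOFTIRQ);
    }

    /*
     * do_tasklet_work (run from the softirq and the idle loop)
     * takes tasklet_lock again to dequeue and execute each
     * tasklet - so every schedule/run pair costs two
     * acquisitions of the one global lock.
     */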

The problem we are facing is that quite a lot of
tasklets are scheduled. The most common one invoked is
the one injecting the VIRQ_TIMER in the guest. Guests
are not insane and don't set their one-shot or periodic
clocks to sub-1ms intervals (which would cause said
tasklet to be scheduled at such small intervals).

The problem appears when PCI passthrough devices are used
across many sockets and we have a mix of heavy-interrupt
guests and idle guests. The idle guests end up seeing
1/10 of their RUNNING timeslice eaten by the hypervisor
(and 40% steal time).

We inject PCI interrupts via hvm_do_IRQ_dpci, which
schedules the hvm_dirq_assist tasklet every time an
interrupt is received.
The callchain is:

_asm_vmexit_handler
 -> vmx_vmexit_handler
    ->vmx_do_extint
        -> do_IRQ
            -> __do_IRQ_guest
                -> hvm_do_IRQ_dpci
                   tasklet_schedule(&dpci->dirq_tasklet);
                   [takes lock to put the tasklet on]

[later on the schedule_tail is invoked which is 'vmx_do_resume']

vmx_do_resume
 -> vmx_asm_do_vmentry
        -> call vmx_intr_assist
          -> vmx_process_softirqs
            -> do_softirq
              [executes the tasklet function, takes the
               lock again]

Meanwhile, other CPUs might be sitting in the idle loop
and be invoked to deliver a VIRQ_TIMER, which also ends
up taking the lock twice: first to schedule the
v->arch.hvm_vcpu.assert_evtchn_irq_tasklet (accounted to
the guest's BLOCKED_state); then to execute it - which is
accounted to the guest's RUNTIME_state.

The end result is that on an 8 socket machine with
PCI passthrough, where four sockets are busy with interrupts
and the other sockets have idle guests, we end up with
the idle guests having around 40% steal time and 1/10
of their timeslice (3ms out of 30ms) tied up
taking the lock. The latency of the PCI interrupts
delivered to the guest also suffers.

With this patch the problem disappears completely.
That is, we remove the lock from the PCI passthrough
use-case (the 'hvm_dirq_assist' case) by not using
tasklets at all.

The patch is simple - instead of scheduling a tasklet
we raise our own softirq - HVM_DPCI_SOFTIRQ - which will
take care of running 'hvm_dirq_assist'. The information we
need on each CPU is which 'struct hvm_pirq_dpci' structure
'hvm_dirq_assist' needs to run on. That is simply solved
by threading the 'struct hvm_pirq_dpci' through a per-CPU
linked list. The rule of running at most one 'hvm_dirq_assist'
per 'hvm_pirq_dpci' is also preserved by having
'schedule_softirq_for' ignore any subsequent calls for an
'hvm_pirq_dpci' which has already been scheduled.
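
Condensed, the new scheduling path looks like this (a sketch
distilled from the io.c hunk below, with comments trimmed):

    static void schedule_softirq_for(struct hvm_pirq_dpci *pirq_dpci)
    {
        unsigned long flags;

        if ( pirq_dpci->masked )
            return;                 /* already queued on some CPU */
        pirq_dpci->masked = 1;
        /* Reference is put by 'dpci_softirq' once the work is done. */
        get_knownalive_domain(pirq_dpci->dom);

        /*
         * The list is strictly per-CPU - disabling interrupts is
         * all the protection we need; no global lock is taken.
         */
        local_irq_save(flags);
        list_add_tail(&pirq_dpci->softirq_list, &this_cpu(dpci_list));
        local_irq_restore(flags);

        raise_softirq(HVM_DPCI_SOFTIRQ);
    }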

Since we are using the 'hvm_pirq_dpci' structure, it is
set up in 'pt_irq_create_bind' - via 'pt_pirq_reset' we
reset the list for our usage.

The mechanism to tear it down is more complex, as there
are three ways it can be executed. To make it simpler,
everything revolves around 'pt_pirq_softirq_active'. If it
returns -EAGAIN that means there is an outstanding softirq
that needs to finish running before we can continue tearing
down. With that in mind:

a) pci_clean_dpci_irq. This gets called when the guest is
   being destroyed. We end up calling 'pt_pirq_softirq_active'
   to see if it is OK to continue the destruction.

   The scenarios in which the 'struct pirq' (and subsequently
   the 'hvm_pirq_dpci') gets destroyed are:

   - the guest did not use the pirq at all after setup.
   - the guest did use the pirq, but decided to mask it and
     left it in that state.
   - the guest did use the pirq, but crashed.

   In all of those scenarios we end up calling
   'pt_pirq_softirq_active' to check if the softirq is still
   active. See below for the 'pt_pirq_softirq_active' loop.

b) pt_irq_destroy_bind (guest disables the MSI). We double-check
   that the softirq has run by piggy-backing on the existing
   'pirq_cleanup_check' mechanism which calls 'pt_pirq_cleanup_check'.
   We add the extra call to 'pt_pirq_softirq_active' in
   'pt_pirq_cleanup_check'.

   NOTE: Guests that use event channels unbind the event
   channel from the PIRQ first, so 'pt_pirq_cleanup_check'
   won't be called, as the event channel is set to zero. In
   that case we clean up via mechanism a) or c).

   There is an extra scenario regardless of the event channel
   being set or not: the guest did 'pt_irq_destroy_bind' while
   an interrupt was triggered and the softirq was scheduled
   (but had not yet run). It is OK to still run the softirq, as
   hvm_dirq_assist won't do anything (the flags are set to
   zero).

c) pt_irq_create_bind (not a typo). The scenarios are:

     - the guest disables the MSI and then enables it
       (rmmod and modprobe in a loop). We call 'pt_pirq_reset',
       which checks whether the softirq has been scheduled.
       Imagine scenario b) with interrupts in flight and c)
       getting called in a loop.

     - we hit one of the error paths in 'pt_irq_create_bind'
       while an interrupt was triggered and the softirq was
       scheduled.

In all of those cases we will spin for up to one second on
'pt_pirq_softirq_active' (at the start of 'pt_irq_create_bind')
with the event_lock spinlock dropped, calling
'process_pending_softirqs'. hvm_dirq_assist will be executed
and then the softirq will clear 'masked', which signals that
we can re-use the 'hvm_pirq_dpci' structure.

Note that the 'flags' variable is cleared at that
point, so hvm_dirq_assist won't actually do anything.
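
In condensed form, the retry loop at the start of
'pt_irq_create_bind' (full version in the diff below) is:

    s_time_t start = NOW();

 restart:
    ...
    spin_lock(&d->event_lock);
    ...
    if ( pt_pirq_reset(d, pirq_dpci) )   /* softirq still pending? */
    {
        spin_unlock(&d->event_lock);
        process_pending_softirqs();      /* let 'dpci_softirq' run */
        if ( (NOW() - start) >> 30 )     /* 2^30 ns, i.e. ~1 second */
            return -EAGAIN;
        goto restart;
    }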

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Suggested-by: Jan Beulich <JBeulich@xxxxxxxx>

---
v2: On top of ref-cnts also have wait loop for the outstanding
    'struct domain' that need to be processed.
v3: Add -ERETRY, fix up StyleGuide issues
v4: Clean it up more, redo per_cpu, this_cpu logic
v5: Instead of threading struct domain, use hvm_pirq_dpci.
v6: Ditch the 'state' bit, expand description, simplify
    softirq and teardown sequence.
---
 xen/arch/x86/domain.c         |   4 +-
 xen/drivers/passthrough/io.c  | 153 ++++++++++++++++++++++++++++++++++++++----
 xen/drivers/passthrough/pci.c |  31 ++++++---
 xen/include/xen/hvm/irq.h     |   3 +-
 xen/include/xen/pci.h         |   2 +-
 xen/include/xen/softirq.h     |   3 +
 6 files changed, 172 insertions(+), 24 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 7b1dfe6..12bf241 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1924,7 +1924,9 @@ int domain_relinquish_resources(struct domain *d)
     switch ( d->arch.relmem )
     {
     case RELMEM_not_started:
-        pci_release_devices(d);
+        ret = pci_release_devices(d);
+        if ( ret )
+            return ret;
 
         /* Tear down paging-assistance stuff. */
         paging_teardown(d);
diff --git a/xen/drivers/passthrough/io.c b/xen/drivers/passthrough/io.c
index d2a1214..4b2aa77 100644
--- a/xen/drivers/passthrough/io.c
+++ b/xen/drivers/passthrough/io.c
@@ -20,14 +20,58 @@
 
 #include <xen/event.h>
 #include <xen/iommu.h>
+#include <xen/cpu.h>
 #include <xen/irq.h>
 #include <asm/hvm/irq.h>
 #include <asm/hvm/iommu.h>
 #include <asm/hvm/support.h>
 #include <xen/hvm/irq.h>
-#include <xen/tasklet.h>
 
-static void hvm_dirq_assist(unsigned long _d);
+static DEFINE_PER_CPU(struct list_head, dpci_list);
+
+/*
+ * Should only be called from hvm_do_IRQ_dpci. We use the
+ * 'masked' as a gate to thwart multiple interrupts.
+ *
+ * The 'masked' is cleared by 'dpci_softirq' when it has
+ * completed executing 'hvm_dirq_assist'.
+ *
+ */
+static void schedule_softirq_for(struct hvm_pirq_dpci *pirq_dpci)
+{
+    unsigned long flags;
+
+    if ( pirq_dpci->masked )
+        return;
+
+    pirq_dpci->masked = 1;
+    get_knownalive_domain(pirq_dpci->dom); /* To be put by 'dpci_softirq' */
+
+    local_irq_save(flags);
+    list_add_tail(&pirq_dpci->softirq_list, &this_cpu(dpci_list));
+    local_irq_restore(flags);
+
+    raise_softirq(HVM_DPCI_SOFTIRQ);
+}
+
+/*
+ * If we are racing with 'dpci_softirq' ('masked' is still set) we return
+ * -EAGAIN. Otherwise we return 0.
+ *
+ *  If it is -EAGAIN, it is the caller's responsibility to kick the softirq
+ *  (with the event_lock dropped).
+ */
+int pt_pirq_softirq_active(struct hvm_pirq_dpci *pirq_dpci)
+{
+    if ( pirq_dpci->masked )
+        return -EAGAIN;
+    /*
+     * If in the future we were to call 'schedule_softirq_for' right
+     * after 'pt_pirq_softirq_active', we MUST reset the list first
+     * (otherwise it might contain stale data).
+     */
+    return 0;
+}
 
 bool_t pt_irq_need_timer(uint32_t flags)
 {
@@ -104,9 +148,20 @@ void free_hvm_irq_dpci(struct hvm_irq_dpci *dpci)
  *
  * As such on every 'pt_irq_create_bind' call we MUST reset the values.
  */
-static void pt_pirq_reset(struct domain *d, struct hvm_pirq_dpci *dpci)
+static int pt_pirq_reset(struct domain *d, struct hvm_pirq_dpci *dpci)
 {
+    /*
+     * We MUST check for this condition as the softirq could be scheduled
+     * and hasn't run yet. As such we MUST delay this reset until it has
+     * completed its job.
+     */
+    if ( !pt_pirq_cleanup_check(dpci) )
+        return -EAGAIN;
+
+    INIT_LIST_HEAD(&dpci->softirq_list);
     dpci->dom = d;
+
+    return 0;
 }
 
 int pt_irq_create_bind(
@@ -116,7 +171,9 @@ int pt_irq_create_bind(
     struct hvm_pirq_dpci *pirq_dpci;
     struct pirq *info;
     int rc, pirq = pt_irq_bind->machine_irq;
+    s_time_t start = NOW();
 
+ restart:
     if ( pirq < 0 || pirq >= d->nr_pirqs )
         return -EINVAL;
 
@@ -146,7 +203,17 @@ int pt_irq_create_bind(
         return -ENOMEM;
     }
     pirq_dpci = pirq_dpci(info);
-    pt_pirq_reset(d, pirq_dpci);
+    /* A crude 'while' loop with us dropping the spinlock and giving
+     * 'dpci_softirq' a chance to run. We do this for up to one second,
+     * at which point we give up. */
+    if ( pt_pirq_reset(d, pirq_dpci) )
+    {
+        spin_unlock(&d->event_lock);
+        process_pending_softirqs();
+        if ( ( NOW() - start ) >> 30 )
+            return -EAGAIN;
+        goto restart;
+    }
 
     switch ( pt_irq_bind->irq_type )
     {
@@ -429,14 +496,12 @@ void pt_pirq_init(struct domain *d, struct hvm_pirq_dpci *dpci)
 {
     INIT_LIST_HEAD(&dpci->digl_list);
     dpci->gmsi.dest_vcpu_id = -1;
-    softirq_tasklet_init(&dpci->tasklet, hvm_dirq_assist, (unsigned long)dpci);
 }
 
 bool_t pt_pirq_cleanup_check(struct hvm_pirq_dpci *dpci)
 {
-    if ( !dpci->flags )
+    if ( !dpci->flags && !pt_pirq_softirq_active(dpci) )
     {
-        tasklet_kill(&dpci->tasklet);
         dpci->dom = NULL;
         return 1;
     }
@@ -479,8 +544,7 @@ int hvm_do_IRQ_dpci(struct domain *d, struct pirq *pirq)
          !(pirq_dpci->flags & HVM_IRQ_DPCI_MAPPED) )
         return 0;
 
-    pirq_dpci->masked = 1;
-    tasklet_schedule(&pirq_dpci->tasklet);
+    schedule_softirq_for(pirq_dpci);
     return 1;
 }
 
@@ -534,15 +598,14 @@ void hvm_dpci_msi_eoi(struct domain *d, int vector)
     spin_unlock(&d->event_lock);
 }
 
-static void hvm_dirq_assist(unsigned long _d)
+static void hvm_dirq_assist(struct hvm_pirq_dpci *pirq_dpci)
 {
-    struct hvm_pirq_dpci *pirq_dpci = (struct hvm_pirq_dpci *)_d;
     struct domain *d = pirq_dpci->dom;
 
     ASSERT(d->arch.hvm_domain.irq.dpci);
 
     spin_lock(&d->event_lock);
-    if ( test_and_clear_bool(pirq_dpci->masked) )
+    if ( pirq_dpci->masked )
     {
         struct pirq *pirq = dpci_pirq(pirq_dpci);
         const struct dev_intx_gsi_link *digl;
@@ -640,3 +703,69 @@ void hvm_dpci_eoi(struct domain *d, unsigned int guest_gsi,
 unlock:
     spin_unlock(&d->event_lock);
 }
+
+static void dpci_softirq(void)
+{
+    struct hvm_pirq_dpci *pirq_dpci;
+    unsigned int cpu = smp_processor_id();
+    LIST_HEAD(our_list);
+
+    local_irq_disable();
+    list_splice_init(&per_cpu(dpci_list, cpu), &our_list);
+    local_irq_enable();
+
+    while ( !list_empty(&our_list) )
+    {
+        pirq_dpci = list_entry(our_list.next, struct hvm_pirq_dpci,
+                               softirq_list);
+        list_del(&pirq_dpci->softirq_list);
+
+        hvm_dirq_assist(pirq_dpci);
+
+        put_domain(pirq_dpci->dom);
+        pirq_dpci->masked = 0;
+    }
+}
+static int cpu_callback(
+    struct notifier_block *nfb, unsigned long action, void *hcpu)
+{
+    unsigned int cpu = (unsigned long)hcpu;
+
+    switch ( action )
+    {
+    case CPU_UP_PREPARE:
+        INIT_LIST_HEAD(&per_cpu(dpci_list, cpu));
+        break;
+    case CPU_UP_CANCELED:
+    case CPU_DEAD:
+        /*
+         * On CPU_DYING this callback is called (on the CPU that is dying)
+         * with a possible HVM_DPCI_SOFTIRQ pending - at which point we can
+         * clear out any outstanding domains (by virtue of the idle loop
+         * calling the softirq later). In the CPU_DEAD case the CPU is deaf
+         * and there are no pending softirqs for us to handle, so we can chill.
+         */
+        ASSERT(list_empty(&per_cpu(dpci_list, cpu)));
+        break;
+    default:
+        break;
+    }
+
+    return NOTIFY_DONE;
+}
+
+static struct notifier_block cpu_nfb = {
+    .notifier_call = cpu_callback,
+};
+
+static int __init setup_dpci_softirq(void)
+{
+    unsigned int cpu;
+
+    for_each_online_cpu(cpu)
+        INIT_LIST_HEAD(&per_cpu(dpci_list, cpu));
+
+    open_softirq(HVM_DPCI_SOFTIRQ, dpci_softirq);
+    register_cpu_notifier(&cpu_nfb);
+    return 0;
+}
+__initcall(setup_dpci_softirq);
diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index 81e8a3a..d147189 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -767,40 +767,51 @@ static int pci_clean_dpci_irq(struct domain *d,
         xfree(digl);
     }
 
-    tasklet_kill(&pirq_dpci->tasklet);
-
-    return 0;
+    return pt_pirq_softirq_active(pirq_dpci);
 }
 
-static void pci_clean_dpci_irqs(struct domain *d)
+static int pci_clean_dpci_irqs(struct domain *d)
 {
     struct hvm_irq_dpci *hvm_irq_dpci = NULL;
 
     if ( !iommu_enabled )
-        return;
+        return -ESRCH;
 
     if ( !is_hvm_domain(d) )
-        return;
+        return -EINVAL;
 
     spin_lock(&d->event_lock);
     hvm_irq_dpci = domain_get_irq_dpci(d);
     if ( hvm_irq_dpci != NULL )
     {
-        pt_pirq_iterate(d, pci_clean_dpci_irq, NULL);
+        int ret = pt_pirq_iterate(d, pci_clean_dpci_irq, NULL);
+
+        if ( ret )
+        {
+            spin_unlock(&d->event_lock);
+            return ret;
+        }
 
         d->arch.hvm_domain.irq.dpci = NULL;
         free_hvm_irq_dpci(hvm_irq_dpci);
     }
     spin_unlock(&d->event_lock);
+    return 0;
 }
 
-void pci_release_devices(struct domain *d)
+int pci_release_devices(struct domain *d)
 {
     struct pci_dev *pdev;
     u8 bus, devfn;
+    int ret;
 
     spin_lock(&pcidevs_lock);
-    pci_clean_dpci_irqs(d);
+    ret = pci_clean_dpci_irqs(d);
+    if ( ret == -EAGAIN )
+    {
+        spin_unlock(&pcidevs_lock);
+        return ret;
+    }
     while ( (pdev = pci_get_pdev_by_domain(d, -1, -1, -1)) )
     {
         bus = pdev->bus;
@@ -811,6 +822,8 @@ void pci_release_devices(struct domain *d)
                    PCI_SLOT(devfn), PCI_FUNC(devfn));
     }
     spin_unlock(&pcidevs_lock);
+
+    return 0;
 }
 
 #define PCI_CLASS_BRIDGE_HOST    0x0600
diff --git a/xen/include/xen/hvm/irq.h b/xen/include/xen/hvm/irq.h
index 94a550a..4fc6dcf 100644
--- a/xen/include/xen/hvm/irq.h
+++ b/xen/include/xen/hvm/irq.h
@@ -99,7 +99,7 @@ struct hvm_pirq_dpci {
     struct domain *dom;
     struct hvm_gmsi_info gmsi;
     struct timer timer;
-    struct tasklet tasklet;
+    struct list_head softirq_list;
 };
 
 void pt_pirq_init(struct domain *, struct hvm_pirq_dpci *);
@@ -109,6 +109,7 @@ int pt_pirq_iterate(struct domain *d,
                               struct hvm_pirq_dpci *, void *arg),
                     void *arg);
 
+int pt_pirq_softirq_active(struct hvm_pirq_dpci *);
 /* Modify state of a PCI INTx wire. */
 void hvm_pci_intx_assert(
     struct domain *d, unsigned int device, unsigned int intx);
diff --git a/xen/include/xen/pci.h b/xen/include/xen/pci.h
index 91520bc..5f295f3 100644
--- a/xen/include/xen/pci.h
+++ b/xen/include/xen/pci.h
@@ -99,7 +99,7 @@ struct pci_dev *pci_lock_domain_pdev(
 
 void setup_hwdom_pci_devices(struct domain *,
                             int (*)(u8 devfn, struct pci_dev *));
-void pci_release_devices(struct domain *d);
+int pci_release_devices(struct domain *d);
 int pci_add_segment(u16 seg);
 const unsigned long *pci_get_ro_map(u16 seg);
 int pci_add_device(u16 seg, u8 bus, u8 devfn, const struct pci_dev_info *);
diff --git a/xen/include/xen/softirq.h b/xen/include/xen/softirq.h
index 0895a16..f3da5df 100644
--- a/xen/include/xen/softirq.h
+++ b/xen/include/xen/softirq.h
@@ -8,6 +8,9 @@ enum {
     NEW_TLBFLUSH_CLOCK_PERIOD_SOFTIRQ,
     RCU_SOFTIRQ,
     TASKLET_SOFTIRQ,
+#ifdef HAS_PASSTHROUGH
+    HVM_DPCI_SOFTIRQ,
+#endif
     NR_COMMON_SOFTIRQS
 };
 
-- 
1.9.3

