
Re: [Xen-devel] [PATCH] xen: idle_loop: either deal with tasklets or go idle



On Fri, 2017-06-16 at 02:54 -0600, Jan Beulich wrote:
> > > > On 14.06.17 at 18:53, <dario.faggioli@xxxxxxxxxx> wrote:
> > 
> > --- a/xen/arch/x86/domain.c
> > +++ b/xen/arch/x86/domain.c
> > @@ -112,12 +112,18 @@ static void play_dead(void)
> >  
> >  static void idle_loop(void)
> >  {
> > +    unsigned int cpu = smp_processor_id();
> > +
> >      for ( ; ; )
> >      {
> > -        if ( cpu_is_offline(smp_processor_id()) )
> > +        if ( cpu_is_offline(cpu) )
> >              play_dead();
> > -        (*pm_idle)();
> > -        do_tasklet();
> > +
> > +        /* Are we here for running vcpu context tasklets, or for idling? */
> > +        if ( unlikely(tasklet_work_to_do(cpu)) )
> 
> I'm not really sure about the "unlikely()" here.
> 
It's basically already there, without this patch, at the very beginning
of do_tasklet():

    if ( likely(*work_to_do != (TASKLET_enqueued|TASKLET_scheduled)) )
        return;

Which is exactly the check that I moved into tasklet_work_to_do() and,
as you can see, it has the likely.

So, I fundamentally kept it for consistency with the old code. I
actually think it does make sense, but I don't have too strong an
opinion about this.

> > +            do_tasklet(cpu);
> > +        else
> > +            (*pm_idle)();
> 
> Please take the opportunity and drop the pointless parentheses
> and indirection.
> 
Ok.

> > --- a/xen/common/tasklet.c
> > +++ b/xen/common/tasklet.c
> > @@ -104,19 +104,11 @@ static void do_tasklet_work(unsigned int cpu, struct list_head *list)
> >  }
> >  
> >  /* VCPU context work */
> > -void do_tasklet(void)
> > +void do_tasklet(unsigned int cpu)
> >  {
> > -    unsigned int cpu = smp_processor_id();
> 
> I'm not convinced it is a good idea to have the caller pass in the
> CPU
> number. 
>
Yes, I know. I couldn't make up my mind about it either. I guess I can
get rid of this aspect of the patch.

> >      unsigned long *work_to_do = &per_cpu(tasklet_work_to_do, cpu);
> >      struct list_head *list = &per_cpu(tasklet_list, cpu);
> >  
> > -    /*
> > -     * Work must be enqueued *and* scheduled. Otherwise there is no work to
> > -     * do, and/or scheduler needs to run to update idle vcpu priority.
> > -     */
> > -    if ( likely(*work_to_do != (TASKLET_enqueued|TASKLET_scheduled)) )
> > -        return;
> 
> Perhaps it also wouldn't hurt to convert this to an ASSERT() too.
> 
Nope, I can't. It's a best-effort check: *work_to_do (which is
per_cpu(tasklet_work_to_do, cpu)) can change concurrently, so the
assert may fail.

The code is, of course, safe: if we think there's work but there isn't,
the list of pending tasklets will simply be empty (and we check that
after taking the tasklet lock).
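
For reference, here is a slightly simplified sketch of what do_tasklet()
does (roughly following current xen/common/tasklet.c): the flags check
at the top is only a best-effort fast path, while the authoritative
check is the list_empty() done under tasklet_lock:

    void do_tasklet(void)
    {
        unsigned int cpu = smp_processor_id();
        unsigned long *work_to_do = &per_cpu(tasklet_work_to_do, cpu);
        struct list_head *list = &per_cpu(tasklet_list, cpu);

        /* Best-effort fast path: the flags can change concurrently. */
        if ( likely(*work_to_do != (TASKLET_enqueued|TASKLET_scheduled)) )
            return;

        spin_lock_irq(&tasklet_lock);

        /* Authoritative check, done while holding tasklet_lock. */
        if ( unlikely(!list_empty(list)) )
            do_tasklet_work(cpu, list);

        if ( list_empty(list) )
        {
            clear_bit(_TASKLET_enqueued, work_to_do);
            raise_softirq(SCHEDULE_SOFTIRQ);
        }

        spin_unlock_irq(&tasklet_lock);
    }

So an ASSERT() on the flags would be asserting on a value that may have
changed between the caller's check and do_tasklet() re-reading it.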

> > --- a/xen/include/xen/tasklet.h
> > +++ b/xen/include/xen/tasklet.h
> > @@ -40,9 +40,19 @@ DECLARE_PER_CPU(unsigned long, tasklet_work_to_do);
> >  #define TASKLET_enqueued   (1ul << _TASKLET_enqueued)
> >  #define TASKLET_scheduled  (1ul << _TASKLET_scheduled)
> >  
> > +static inline bool tasklet_work_to_do(unsigned int cpu)
> > +{
> > +    /*
> > +     * Work must be enqueued *and* scheduled. Otherwise there is no work to
> > +     * do, and/or scheduler needs to run to update idle vcpu priority.
> > +     */
> > +    return per_cpu(tasklet_work_to_do, cpu) == (TASKLET_enqueued|
> > +                                                TASKLET_scheduled);
> > +}
> 
> Wouldn't cpu_is_haltable() then also better use this new function?
> 
Mmm... Perhaps. It's certainly less code churn.

The ARM code would then contain two invocations of cpu_is_haltable()
(the first happens with IRQs enabled, so a second one with IRQs
disabled is necessary). But that is *exactly* the same thing we do on
x86 (they're just in different functions in that case).
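
(For context, cpu_is_haltable() is, roughly, the following macro from
xen/include/xen/sched.h, so it already folds the tasklet_work_to_do
check in with the softirq-pending and online ones:)

    #define cpu_is_haltable(cpu)                    \
        (!softirq_pending(cpu) &&                   \
         cpu_online(cpu) &&                         \
         !per_cpu(tasklet_work_to_do, cpu))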

So, I reworked the patch according to these suggestions, and you can
look at it below.

If you like it better, I'm ok re-submitting it properly in this shape.
Other thoughts anyone else?

Thanks and Regards,
Dario
---
NOTE that, since we now call do_tasklet() only after having checked
cpu_is_haltable(), the early-return check in there is not likely any
longer (hence the likely() being dropped below).
---
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 76310ed..86cd612 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -41,20 +41,28 @@ DEFINE_PER_CPU(struct vcpu *, curr_vcpu);
 
 void idle_loop(void)
 {
+    unsigned int cpu = smp_processor_id();
+
     for ( ; ; )
     {
-        if ( cpu_is_offline(smp_processor_id()) )
+        if ( cpu_is_offline(cpu) )
             stop_cpu();
 
-        local_irq_disable();
-        if ( cpu_is_haltable(smp_processor_id()) )
+        /* Are we here for running vcpu context tasklets, or for idling? */
+        if ( cpu_is_haltable(cpu) )
         {
-            dsb(sy);
-            wfi();
+            local_irq_disable();
+            /* We need to check again, with IRQ disabled */
+            if ( cpu_is_haltable(cpu) )
+            {
+                dsb(sy);
+                wfi();
+            }
+            local_irq_enable();
         }
-        local_irq_enable();
+        else
+            do_tasklet();
 
-        do_tasklet();
         do_softirq();
         /*
          * We MUST be last (or before dsb, wfi). Otherwise after we get the
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 49388f4..c520fdd 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -112,12 +112,18 @@ static void play_dead(void)
 
 static void idle_loop(void)
 {
+    unsigned int cpu = smp_processor_id();
+
     for ( ; ; )
     {
-        if ( cpu_is_offline(smp_processor_id()) )
+        if ( cpu_is_offline(cpu) )
             play_dead();
-        (*pm_idle)();
-        do_tasklet();
+
+        /* Are we here for idling, or for running vcpu context tasklets? */
+        if ( cpu_is_haltable(cpu) )
+            pm_idle();
+        else
+            do_tasklet();
         do_softirq();
         /*
          * We MUST be last (or before pm_idle). Otherwise after we get the
diff --git a/xen/common/tasklet.c b/xen/common/tasklet.c
index 365a777..ebdce12 100644
--- a/xen/common/tasklet.c
+++ b/xen/common/tasklet.c
@@ -114,7 +114,7 @@ void do_tasklet(void)
      * Work must be enqueued *and* scheduled. Otherwise there is no work to
      * do, and/or scheduler needs to run to update idle vcpu priority.
      */
-    if ( likely(*work_to_do != (TASKLET_enqueued|TASKLET_scheduled)) )
+    if ( *work_to_do != (TASKLET_enqueued|TASKLET_scheduled) )
         return;
 
     spin_lock_irq(&tasklet_lock);
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

