
Re: [Xen-devel] [PATCH v2 1/1] ratelimit: Implement rate limit for credit2 scheduler

Rate limiting ensures that a vcpu will execute for a minimum amount of time before being put at the back of the runqueue or being preempted by a higher-priority thread.



On 22/07/16 11:36, Dario Faggioli wrote:
On Mon, 2016-07-18 at 13:22 +0100, Anshul Makkar wrote:

Hey, Anshul.

Thanks, and sorry for the delay in reviewing.

This version is an improvement, but it looks to me like you've missed a
few of the review comments on v1.
Sorry about that!
It introduces a minimum amount of latency

"introduces context-switch rate-limiting"
diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index 8b95a47..68bcdb8 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c

@@ -1601,6 +1602,34 @@ csched2_dom_cntl(
+    switch (sc->cmd )
+    {
+        case XEN_SYSCTL_SCHEDOP_putinfo:
+            if ( params->ratelimit_us &&
+                ( params->ratelimit_us < CSCHED2_MIN_TIMER ||
+                  params->ratelimit_us >
I remember saying already (although it may have been in private, not on this
list) that I think we should just use XEN_SYSCTL_SCHED_RATELIMIT_MAX
and XEN_SYSCTL_SCHED_RATELIMIT_MIN here.

CSCHED2_MIN_TIMER and CSCHED2_MAX_TIMER are internal implementation
details, and I don't like them being exposed (albeit indirectly) to the
user.
addressed.
+                return rc;
+            spin_lock_irqsave(&prv->lock, flags);
+
This is OK. However, the code base has changed in the meantime (sorry! :-P),
and this spin_lock_irqsave() needs to become a write_lock_irqsave().
done.
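
FWIW, a minimal sketch of how the XEN_SYSCTL_SCHEDOP_putinfo branch could end
up looking with both of the above folded in (a sketch only, assuming the v2
layout where params points at the sysctl payload, flags is the usual
unsigned long, and the rc being returned was meant to be -EINVAL):

    case XEN_SYSCTL_SCHEDOP_putinfo:
        /* Validate against the generic sysctl bounds rather than the
         * scheduler-internal CSCHED2_{MIN,MAX}_TIMER values. */
        if ( params->ratelimit_us &&
             (params->ratelimit_us < XEN_SYSCTL_SCHED_RATELIMIT_MIN ||
              params->ratelimit_us > XEN_SYSCTL_SCHED_RATELIMIT_MAX) )
            return -EINVAL;

        /* prv->lock is an rwlock now, so take the write side. */
        write_lock_irqsave(&prv->lock, flags);
        prv->ratelimit_us = params->ratelimit_us;
        write_unlock_irqrestore(&prv->lock, flags);
        break;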

Mmm... if you wanted to implement my suggestion from
<1468400021.13039.33.camel@xxxxxxxxxx>, you're definitely missing
something:

      s_time_t ratelimit_min = prv->ratelimit_us;
      if ( snext->vcpu->is_running )
          ratelimit_min = snext->vcpu->runstate.state_entry_time +
                          MICROSECS(prv->ratelimit_us) - now;

Yes, I missed the "if" part that checks whether the vcpu is currently running.
In fact, you're initializing ratelimit_min and then immediately
overriding that... I'm surprised the compiler didn't complain.

+    if ( ratelimit_min > min_time )
+        min_time = ratelimit_min;
+    }
+

@@ -1707,32 +1749,33 @@ csched2_runtime(const struct scheduler *ops, int cpu, struct csched2_vcpu *snext
          }
      }
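
Putting the suggestion together, the ratelimit handling in csched2_runtime()
might look roughly like this (a sketch only; I'm assuming the initial value
should also go through MICROSECS() so it is in the same unit as min_time):

    if ( prv->ratelimit_us )
    {
        s_time_t ratelimit_min = MICROSECS(prv->ratelimit_us);

        /* If snext is already running, it is only owed whatever is left
         * of its ratelimit window (which can well be <= 0). */
        if ( snext->vcpu->is_running )
            ratelimit_min = snext->vcpu->runstate.state_entry_time +
                            MICROSECS(prv->ratelimit_us) - now;

        if ( ratelimit_min > min_time )
            min_time = ratelimit_min;
    }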


@@ -1746,7 +1789,7 @@ void __dump_execstate(void *unused);
  static struct csched2_vcpu *
  runq_candidate(struct csched2_runqueue_data *rqd,
                 struct csched2_vcpu *scurr,
-               int cpu, s_time_t now)
+               int cpu, s_time_t now, struct csched2_private *prv)

Reviewing v1, George said this:

   Since we have the cpu, I think we can get ops this way, without
   cluttering things up with the extra argument:

       struct csched_private *prv = CSCHED_PRIV(per_cpu(scheduler, cpu));
Yes, I missed that change too. Addressed in v3.
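
In credit2 terms that would be something along these lines (a sketch, assuming
the CSCHED2_PRIV() accessor already present in sched_credit2.c):

    static struct csched2_vcpu *
    runq_candidate(struct csched2_runqueue_data *rqd,
                   struct csched2_vcpu *scurr,
                   int cpu, s_time_t now)
    {
        /* Derive the private data from the per-cpu scheduler pointer
         * instead of threading prv through as an extra argument. */
        struct csched2_private *prv = CSCHED2_PRIV(per_cpu(scheduler, cpu));
        ...
    }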

@@ -1775,9 +1829,13 @@ runq_candidate(struct csched2_runqueue_data *rqd,
          }

          /* If the next one on the list has more credit than current
-         * (or idle, if current is not runnable), choose it. */
+         * (or idle, if current is not runnable) and current one has already
+         * executed for more than ratelimit. choose it.
+         * Control has reached here means that current vcpu has executed >
+         * ratelimit_us or ratelimit is off, so chose the next one.
+         */
          if ( svc->credit > snext->credit )
-            snext = svc;
+                snext = svc;

Both George and I agreed that changing the comment like this is not
helping much and should not be done.
Though I find the extended comment useful; if you don't agree, I will remove it in v3.
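
For context, the gate that makes that comment true would sit near the top of
runq_candidate(), along these lines (a sketch, assuming scurr's runstate
entry time is the right reference point for how long it has been running):

    /* Keep running scurr if ratelimiting is enabled, scurr is runnable
     * and it has not yet run for ratelimit_us microseconds. */
    if ( prv->ratelimit_us && !is_idle_vcpu(scurr->vcpu) &&
         vcpu_runnable(scurr->vcpu) &&
         (now - scurr->vcpu->runstate.state_entry_time) <
          MICROSECS(prv->ratelimit_us) )
        return scurr;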


Regards,
Dario

Anshul




 

