
Re: [Xen-devel] [PATCH 3 of 6 v2] xen: sched_credit: use current_on_cpu() when appropriate



On 12/12/12 02:52, Dario Faggioli wrote:
current_on_cpu() was defined by the previous change; using it wherever
appropriate throughout sched_credit.c makes the code easier to read.

Hmm, I hadn't read this patch when I commented about removing the macro from the first patch. :-)

I personally think that using vc->processor is better in that patch anyway; but using this macro elsewhere is probably fine.
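(For anyone reading the archive without the previous patch in front of them: judging purely from the substitutions in the hunks quoted below, the helper is presumably just a thin wrapper that names the per-CPU "currently scheduled vcpu" lookup. The sketch below is inferred from those substitutions, not copied from the actual patch, and the parameter name is a guess.)

/* Inferred sketch of the helper introduced by the previous patch in the
 * series: it simply gives a readable name to the "vcpu currently running
 * on this pcpu" lookup used all over sched_credit.c. */
#define current_on_cpu(_c) (per_cpu(schedule_data, _c).curr)

/* So a check such as
 *     per_cpu(schedule_data, vc->processor).curr == vc
 * reads as
 *     current_on_cpu(vc->processor) == vc
 */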

I think, from a taste point of view, I would have put this patch, with the new definition, as the first patch in the series, and then had the second patch just use it.

I'll go back and respond to Jan's comment about the macro.

 -George


Signed-off-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>

diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -231,7 +231,7 @@ static void burn_credits(struct csched_v
     unsigned int credits;
 
     /* Assert svc is current */
-    ASSERT(svc==CSCHED_VCPU(per_cpu(schedule_data, svc->vcpu->processor).curr));
+    ASSERT( svc == CSCHED_VCPU(current_on_cpu(svc->vcpu->processor)) );
 
     if ( (delta = now - svc->start_time) <= 0 )
         return;
@@ -249,8 +249,7 @@ DEFINE_PER_CPU(unsigned int, last_tickle
 static inline void
 __runq_tickle(unsigned int cpu, struct csched_vcpu *new)
 {
-    struct csched_vcpu * const cur =
-        CSCHED_VCPU(per_cpu(schedule_data, cpu).curr);
+    struct csched_vcpu * const cur = CSCHED_VCPU(current_on_cpu(cpu));
     struct csched_private *prv = CSCHED_PRIV(per_cpu(scheduler, cpu));
     cpumask_t mask, idle_mask;
     int idlers_empty;
@@ -387,7 +386,7 @@ csched_alloc_pdata(const struct schedule
         per_cpu(schedule_data, cpu).sched_priv = spc;
 
     /* Start off idling... */
-    BUG_ON(!is_idle_vcpu(per_cpu(schedule_data, cpu).curr));
+    BUG_ON(!is_idle_vcpu(current_on_cpu(cpu)));
     cpumask_set_cpu(cpu, prv->idlers);
 
     spin_unlock_irqrestore(&prv->lock, flags);
@@ -730,7 +729,7 @@ csched_vcpu_sleep(const struct scheduler
 
     BUG_ON( is_idle_vcpu(vc) );
 
-    if ( per_cpu(schedule_data, vc->processor).curr == vc )
+    if ( current_on_cpu(vc->processor) == vc )
         cpu_raise_softirq(vc->processor, SCHEDULE_SOFTIRQ);
     else if ( __vcpu_on_runq(svc) )
         __runq_remove(svc);
@@ -744,7 +743,7 @@ csched_vcpu_wake(const struct scheduler
 
     BUG_ON( is_idle_vcpu(vc) );
 
-    if ( unlikely(per_cpu(schedule_data, cpu).curr == vc) )
+    if ( unlikely(current_on_cpu(cpu) == vc) )
     {
         SCHED_STAT_CRANK(vcpu_wake_running);
         return;
@@ -1213,7 +1212,7 @@ static struct csched_vcpu *
 csched_runq_steal(int peer_cpu, int cpu, int pri)
 {
     const struct csched_pcpu * const peer_pcpu = CSCHED_PCPU(peer_cpu);
-    const struct vcpu * const peer_vcpu = per_cpu(schedule_data, peer_cpu).curr;
+    const struct vcpu * const peer_vcpu = current_on_cpu(peer_cpu);
     struct csched_vcpu *speer;
     struct list_head *iter;
     struct vcpu *vc;
@@ -1502,7 +1501,7 @@ csched_dump_pcpu(const struct scheduler
     printk("core=%s\n", cpustr);
 
     /* current VCPU */
-    svc = CSCHED_VCPU(per_cpu(schedule_data, cpu).curr);
+    svc = CSCHED_VCPU(current_on_cpu(cpu));
     if ( svc )
     {
         printk("\trun: ");




 

