Re: [Xen-devel] [RFC PATCH 1/4] Implement cbs algorithm, remove extra queues, latency scaling, and weight support from sedf
Although very hard, I tried to have a look at the CBS implementation
(so, some of the '+' hunks):

On ven, 2014-06-13 at 15:58 -0400, Josh Whitehead wrote:
> ---
> @@ -410,49 +301,59 @@ static void desched_edf_dom(s_time_t now, struct vcpu* d)
>
>      __del_from_queue(d);
>
> -    /*
> -     * Manage bookkeeping (i.e. calculate next deadline, memorise
> -     * overrun-time of slice) of finished domains.
> -     */
> +#ifdef SEDF_STATS
> +    /* Manage deadline misses */
> +    if ( unlikely(inf->deadl_abs < now) )
> +    {
> +        inf->miss_tot++;
> +        inf->miss_time += inf->cputime;
> +    }
> +#endif
> +
> +    /* Manage overruns */
>      if ( inf->cputime >= inf->slice )
>      {
>          inf->cputime -= inf->slice;
> -
> -        if ( inf->period < inf->period_orig )
> -        {
> -            /* This domain runs in latency scaling or burst mode */
> -            inf->period *= 2;
> -            inf->slice *= 2;
> -            if ( (inf->period > inf->period_orig) ||
> -                 (inf->slice > inf->slice_orig) )
> -            {
> -                /* Reset slice and period */
> -                inf->period = inf->period_orig;
> -                inf->slice = inf->slice_orig;
> -            }
> -        }
>
>          /* Set next deadline */
>          inf->deadl_abs += inf->period;
> +
> +        /* Ensure that the cputime is always less than slice */
> +        if ( unlikely(inf->cputime > inf->slice) )
> +        {
> +#ifdef SEDF_STATS
> +            inf->over_tot++;
> +            inf->over_time += inf->cputime;
> +#endif
> +
> +            /* Make up for the overage by pushing the deadline
> +               into the future */
> +            inf->deadl_abs += ((inf->cputime / inf->slice)
> +                               * inf->period) * 2;
> +            inf->cputime -= (inf->cputime / inf->slice) * inf->slice;
> +        }
>
Can you enlighten me a bit about the math here? I see what you're up
to, but I'm not sure I understand the '*2'...

> +        /* Ensure that the start of the next period is in the future */
> +        if ( unlikely(PERIOD_BEGIN(inf) < now) )
> +            inf->deadl_abs +=
> +                (DIV_UP(now - PERIOD_BEGIN(inf),
> +                        inf->period)) * inf->period;
>      }

> @@ -1100,62 +663,65 @@ static void sedf_wake(const struct scheduler *ops, struct vcpu *d)
>          inf->block_tot++;
>  #endif
>
> -    if ( unlikely(now < PERIOD_BEGIN(inf)) )
> -    {
> -        /* Unblocking in extra-time! */
> -        if ( inf->status & EXTRA_WANT_PEN_Q )
> +    if ( sedf_soft(d) )
> +    {
> +        /* Apply CBS rule
> +         * Where:
> +         *     c == Remaining server slice  == (inf->slice - cpu_time)
> +         *     d == Server (vcpu) deadline  == inf->deadl_abs
> +         *     r == Wake-up time of vcpu    == now
> +         *     U == Server (vcpu) bandwidth == (inf->slice / inf->period)
> +         *
> +         * if c >= (d - r) * U --->
> +         *     (inf->slice - cputime) >= (inf->deadl_abs - now) * inf->period
> +         *
Well, I think it's rather:

  (inf->slice - cputime) >= (inf->deadl_abs - now) * (inf->slice / inf->period)

It's only the comment that is wrong, though; the code is ok.

> +         * If true, push deadline back by one period and refresh slice, else
> +         * use current slice and deadline.
> +         */
> +        if((inf->slice - inf->cputime) >=
> +           ((inf->deadl_abs - now) * (inf->slice / inf->period)))
> +        {
>
You can shuffle this a bit more, and avoid the '/'. The condition above
can be rewritten as:

  c >= (d - r) * (inf->slice / inf->period)

i.e.:

  c * inf->period >= (d - r) * inf->slice

and thus the code can be rewritten as:

  if ( (inf->slice - inf->cputime) * inf->period >=
       (inf->deadl_abs - now) * inf->slice )

which I think is better.

One may worry about the fact that the multiplication can overflow, but
that's really unlikely, since all the involved time values are relative
(i.e., remaining runtime, time to deadline, etc.). Anyway, let's cross
that bridge when we get to it.
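Just to make the rearranged condition concrete, here is a quick,
self-contained sketch of the division-free CBS wake-up check. The names
(cbs_params, cbs_refresh_needed) and the 64-bit nanosecond fields are
made up for illustration and only roughly mirror what sedf actually
keeps per vcpu; this is a sketch of the idea, not the actual patch:

  #include <stdint.h>
  #include <stdio.h>

  struct cbs_params {
      uint64_t slice;      /* budget per period (ns)         */
      uint64_t period;     /* server period (ns)             */
      uint64_t cputime;    /* budget consumed so far (ns)    */
      uint64_t deadl_abs;  /* current absolute deadline (ns) */
  };

  /*
   * CBS wake-up rule, rearranged to avoid the division:
   *
   *     c >= (d - r) * U
   *     (slice - cputime) * period >= (deadl_abs - now) * slice
   *
   * Return 1 if the deadline must be postponed and the budget refilled,
   * 0 if the current budget and deadline can be kept.
   */
  static int cbs_refresh_needed(const struct cbs_params *p, uint64_t now)
  {
      if ( now >= p->deadl_abs )
          return 1;  /* deadline already in the past: always refresh */

      return (p->slice - p->cputime) * p->period >=
             (p->deadl_abs - now) * p->slice;
  }

  int main(void)
  {
      /* 10ms slice every 100ms; 4ms already consumed; deadline 30ms away. */
      struct cbs_params p = {
          .slice     =  10000000ULL,
          .period    = 100000000ULL,
          .cputime   =   4000000ULL,
          .deadl_abs = 130000000ULL,
      };
      uint64_t now = 100000000ULL;

      /* 6ms of budget left vs. (30ms * 0.1) = 3ms allowed: must refresh. */
      printf("refresh needed: %d\n", cbs_refresh_needed(&p, now));
      return 0;
  }

With nanosecond-scaled relative values the products stay around
10^14..10^15, so, as said above, overflowing a 64 bit integer would take
truly pathological parameters.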
Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)