[Xen-devel] Question about the credit2 sources in xen-unstable tree.
Hi all,

I am analyzing the credit2 scheduler in order to adapt it to my own private cloud, which has to satisfy the requirements of some latency-sensitive applications. In the latest xen-unstable repository (the current tip) I found some code in xen/common/sched_credit2.c that I do not understand:

/* How long should we let this vcpu run for? */
static s_time_t csched_runtime(const struct scheduler *ops, int cpu,
                               struct csched_vcpu *snext)
{
    s_time_t time = CSCHED_MAX_TIMER;
    struct csched_runqueue_data *rqd = RQD(ops, cpu);
    struct list_head *runq = &rqd->runq;

    if ( is_idle_vcpu(snext->vcpu) )
        return CSCHED_MAX_TIMER;

    /* Basic time */
    time = c2t(rqd, snext->credit, snext);

    /* Next guy on runqueue */
    if ( ! list_empty(runq) )
    {
        struct csched_vcpu *svc = __runq_elem(runq->next);
        s_time_t ntime;

        if ( ! is_idle_vcpu(svc->vcpu) )
        {
            ntime = c2t(rqd, snext->credit - svc->credit, snext);

            if ( time > ntime )
                time = ntime;
        }
    }

    /* Check limits */
    if ( time < CSCHED_MIN_TIMER )
        time = CSCHED_MIN_TIMER;
    else if ( time > CSCHED_MAX_TIMER )
        time = CSCHED_MAX_TIMER;

    return time;
}

As I understand it, this function determines the next vcpu's time slice, i.e. how long it may run from now until the next scheduling event. So, what does this code mean?

    if ( time > ntime )
        time = ntime;

As far as I can tell, it is no different from:

    if ( svc->credit > 0 )
        time = ntime;

If so, it means that when the next guy on the runqueue has positive credit, the computed time slice is ntime (the time corresponding to the credit difference), which may then be saturated by CSCHED_MIN_TIMER; otherwise snext simply keeps its own basic time. What is the implication of this? Am I correct? If not, would you please kindly explain the mechanism for calculating the next time slice?

Thanks

--
Eunbyung Park
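To make the behaviour being asked about concrete, here is a minimal, self-contained sketch (not the actual Xen code) of the csched_runtime() logic. It assumes a hypothetical linear c2t() (1 credit == 1 microsecond) and made-up MIN_TIMER/MAX_TIMER clamp values purely for illustration; the real c2t() in sched_credit2.c scales by per-runqueue/per-vcpu parameters, but as long as it is monotonic in its credit argument, `time > ntime` holds exactly when svc->credit > 0, so snext is only allowed to run until its credit would drop to the level of the next vcpu on the runqueue.

#include <stdio.h>

#define MIN_TIMER   500      /* hypothetical clamp values, in microseconds */
#define MAX_TIMER 10000

/* Hypothetical stand-in for c2t(): linear, 1 credit == 1 us. */
static long c2t(long credit) { return credit; }

/* Mimics the csched_runtime() logic: snext_credit is the credit of the
 * vcpu about to run, next_credit the credit of the first vcpu on the
 * runqueue (if any). */
static long runtime(long snext_credit, long next_credit, int have_next)
{
    long time = c2t(snext_credit);              /* "basic time" */

    if ( have_next )
    {
        /* Time until snext's credit would fall to the next vcpu's level. */
        long ntime = c2t(snext_credit - next_credit);

        if ( time > ntime )                     /* true iff next_credit > 0 */
            time = ntime;
    }

    /* Clamp to [MIN_TIMER, MAX_TIMER]. */
    if ( time < MIN_TIMER )
        time = MIN_TIMER;
    else if ( time > MAX_TIMER )
        time = MAX_TIMER;

    return time;
}

int main(void)
{
    /* Next vcpu has positive credit: slice shrinks to the difference. */
    printf("%ld\n", runtime(8000, 3000, 1));    /* prints 5000 */

    /* Next vcpu has negative credit: snext keeps its full basic time. */
    printf("%ld\n", runtime(8000, -2000, 1));   /* prints 8000 */

    /* No other runnable vcpu: basic time, clamped to the maximum. */
    printf("%ld\n", runtime(20000, 0, 0));      /* prints 10000 */

    return 0;
}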