Re: [Xen-devel] [PATCH v3 1/4] xen: add real time scheduler rtds
Hi George,
2014-09-18 12:08 GMT-04:00 George Dunlap <george.dunlap@xxxxxxxxxxxxx>:
> On 09/14/2014 10:37 PM, Meng Xu wrote:
>>
>> This scheduler follows the Preemptive Global Earliest Deadline First
>> (EDF) theory from the real-time field.
>> At any scheduling point, the VCPU with the earlier deadline has higher
>> priority. The scheduler always picks the highest-priority VCPU to run on a
>> feasible PCPU.
>> A PCPU is feasible if the VCPU can run on this PCPU and the PCPU is
>> either idle or running a lower-priority VCPU.
>>
>> Each VCPU has a dedicated period and budget.
>> The deadline of a VCPU is at the end of each period;
>> A VCPU has its budget replenished at the beginning of each period;
>> While scheduled, a VCPU burns its budget.
>> The VCPU needs to finish its budget before its deadline in each period;
>> The VCPU discards its unused budget at the end of each period.
>> If a VCPU runs out of budget in a period, it has to wait until the next
>> period.
>>
>> Each VCPU is implemented as a deferrable server.
>> When a VCPU has a task running on it, its budget is continuously burned;
>> When a VCPU has no task but with budget left, its budget is preserved.
>>
>> Queue scheme:
>> A global runqueue and a global depletedq for each CPU pool.
>> The runqueue holds all runnable VCPUs with budget, sorted by deadline;
>> the depletedq holds all VCPUs without budget, unsorted.
>>
>> Note: cpumask and cpupool are supported.
>>
>> This is an experimental scheduler.
>>
>> Signed-off-by: Meng Xu <mengxu@xxxxxxxxxxxxx>
>> Signed-off-by: Sisu Xi <xisisu@xxxxxxxxx>
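In code, the per-period accounting described above amounts to roughly the
following (a sketch only, not the patch's actual implementation; field names
follow the rt_vcpu fields printed by rt_dump_vcpu() further down):

/* Sketch: advance the deadline by whole periods and refill the budget. */
static void rt_update_deadline(s_time_t now, struct rt_vcpu *svc)
{
    if ( svc->cur_deadline <= now )
    {
        /* Catch up in case several periods elapsed without being serviced. */
        do
            svc->cur_deadline += svc->period;
        while ( svc->cur_deadline <= now );

        /* Unused budget is discarded; the new period starts with a full one. */
        svc->cur_budget = svc->budget;
    }
}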
>
>
> Getting there, but unfortunately I've got a number of further comments.
>
> Konrad, I think this is very close to being ready -- when is the deadline
> again, and how hard is it? Would it be better to check it in before the
> deadline, and then address the things I'm bringing up here? Or would it be
> better to wait until all the issues are sorted and then check it in (even if
> it's after the deadline)?
>
>
> +/*
> + * Debug related code, dump vcpu/cpu information
> + */
> +static void
> +rt_dump_vcpu(const struct scheduler *ops, const struct rt_vcpu *svc)
> +{
> +    char cpustr[1024];
> +    cpumask_t *cpupool_mask;
> +
> +    ASSERT(svc != NULL);
> +    /* flag vcpu */
> +    if( svc->sdom == NULL )
> +        return;
> +
> +    cpumask_scnprintf(cpustr, sizeof(cpustr), svc->vcpu->cpu_hard_affinity);
> +    printk("[%5d.%-2u] cpu %u, (%"PRI_stime", %"PRI_stime"),"
> +           " cur_b=%"PRI_stime" cur_d=%"PRI_stime" last_start=%"PRI_stime"\n"
> +           " \t\t onQ=%d runnable=%d cpu_hard_affinity=%s ",
> +            svc->vcpu->domain->domain_id,
> +            svc->vcpu->vcpu_id,
> +            svc->vcpu->processor,
> +            svc->period,
> +            svc->budget,
> +            svc->cur_budget,
> +            svc->cur_deadline,
> +            svc->last_start,
> +            __vcpu_on_q(svc),
> +            vcpu_runnable(svc->vcpu),
> +            cpustr);
> +    memset(cpustr, 0, sizeof(cpustr));
> +    cpupool_mask = cpupool_scheduler_cpumask(svc->vcpu->domain->cpupool);
> +    cpumask_scnprintf(cpustr, sizeof(cpustr), cpupool_mask);
> +    printk("cpupool=%s\n", cpustr);
> +}
> +
> +static void
> +rt_dump_pcpu(const struct scheduler *ops, int cpu)
> +{
> +    struct rt_vcpu *svc = rt_vcpu(curr_on_cpu(cpu));
> +
> +    rt_dump_vcpu(ops, svc);
> +}
>
>
> These svc structures are allocated dynamically and may disappear at any
> time... I think you need the lock to cover this as well.
>
> And since this is called both externally (via ops->dump_cpu_state) and
> internally (below), you probably need a locking version and a non-locking
> version (normally you'd make rt_dump_pcpu() a locking "wrapper" around
> __rt_dump_pcpu()).
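For reference, the locking/non-locking split suggested here would look roughly
like this (a sketch only, assuming prv->lock is the lock protecting the svc
structures; whether that lock can actually be taken here is exactly the
problem discussed below):

/* Non-locking worker: caller must already hold the scheduler lock. */
static void
__rt_dump_pcpu(const struct scheduler *ops, int cpu)
{
    struct rt_vcpu *svc = rt_vcpu(curr_on_cpu(cpu));

    rt_dump_vcpu(ops, svc);
}

/* Locking wrapper for external callers (e.g. ops->dump_cpu_state). */
static void
rt_dump_pcpu(const struct scheduler *ops, int cpu)
{
    struct rt_private *prv = rt_priv(ops);
    unsigned long flags;

    spin_lock_irqsave(&prv->lock, flags);
    __rt_dump_pcpu(ops, cpu);
    spin_unlock_irqrestore(&prv->lock, flags);
}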
I tried to add the schedule lock (prv->lock) here, but it caused a system
freeze. The reason is that when this function is called from
schedule_dump() in schedule.c, the lock has already been taken. So I
don't need the lock here, IMO. :-)
void schedule_dump(struct cpupool *c)
{
    int i;
    struct scheduler *sched;
    cpumask_t *cpus;

    sched = (c == NULL) ? &ops : c->sched;
    cpus = cpupool_scheduler_cpumask(c);
    printk("Scheduler: %s (%s)\n", sched->name, sched->opt_name);
    SCHED_OP(sched, dump_settings);

    for_each_cpu (i, cpus)
    {
        spinlock_t *lock = pcpu_schedule_lock(i);

        printk("CPU[%02d] ", i);
        SCHED_OP(sched, dump_cpu_state, i);
        pcpu_schedule_unlock(lock, i);
    }
}
>
>> +
>> +static void
>> +rt_dump(const struct scheduler *ops)
>> +{
>> +    struct list_head *iter_sdom, *iter_svc, *runq, *depletedq, *iter;
>> +    struct rt_private *prv = rt_priv(ops);
>> +    struct rt_vcpu *svc;
>> +    unsigned int cpu = 0;
>> +    cpumask_t *online;
>> +    struct rt_dom *sdom;
>> +    unsigned long flags;
>> +
>> +    ASSERT(!list_empty(&prv->sdom));
>> +    sdom = list_entry(prv->sdom.next, struct rt_dom, sdom_elem);
>> +    online = cpupool_scheduler_cpumask(sdom->dom->cpupool);
>> +    runq = rt_runq(ops);
>> +    depletedq = rt_depletedq(ops);
>
>
> Same thing with all these -- other CPUs may be modifying prv->sdom.
>
I do need a lock here, because when this function is called there is no
lock protection; as you can see above, it is invoked via
SCHED_OP(sched, dump_settings);
with no lock held.
So I will add a lock in rt_dump() but not in rt_dump_pcpu().
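Something along these lines (a sketch only, assuming a __q_elem() helper that
maps a queue element back to its rt_vcpu; the actual patch may differ):

static void
rt_dump(const struct scheduler *ops)
{
    struct list_head *runq, *depletedq, *iter;
    struct rt_private *prv = rt_priv(ops);
    struct rt_vcpu *svc;
    unsigned long flags;

    /*
     * rt_dump() is reached via SCHED_OP(sched, dump_settings) with no
     * lock held, so take the global scheduler lock here.
     */
    spin_lock_irqsave(&prv->lock, flags);

    runq = rt_runq(ops);
    depletedq = rt_depletedq(ops);

    printk("Global RunQueue info:\n");
    list_for_each( iter, runq )
    {
        svc = __q_elem(iter);
        rt_dump_vcpu(ops, svc);
    }

    printk("Global DepletedQueue info:\n");
    list_for_each( iter, depletedq )
    {
        svc = __q_elem(iter);
        rt_dump_vcpu(ops, svc);
    }

    spin_unlock_irqrestore(&prv->lock, flags);
}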
Thanks,
Meng
-----------
Meng Xu
PhD Student in Computer and Information Science
University of Pennsylvania
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel