
Re: [Xen-devel] [PATCH v2 1/4] xen: add real time scheduler rt



2014-09-11 4:44 GMT-04:00 Dario Faggioli <dario.faggioli@xxxxxxxxxx>:
> On Tue, 2014-09-09 at 14:21 -0400, Meng Xu wrote:
>> >> +/*
>> >> + * Debug related code, dump vcpu/cpu information
>> >> + */
>> >> +static void
>> >> +rt_dump_vcpu(const struct scheduler *ops, const struct rt_vcpu *svc)
>> >> +{
>> >> +    struct rt_private *prv = RT_PRIV(ops);
>> >> +    char cpustr[1024];
>> >> +    cpumask_t *cpupool_mask;
>> >> +
>> >> +    ASSERT(svc != NULL);
>> >> +    /* flag vcpu */
>> >> +    if( svc->sdom == NULL )
>> >> +        return;
>> >> +
>> >> +    cpumask_scnprintf(cpustr, sizeof(cpustr), svc->vcpu->cpu_hard_affinity);
>> >> +    printk("[%5d.%-2u] cpu %u, (%"PRI_stime", %"PRI_stime"),"
>> >> +           " cur_b=%"PRI_stime" cur_d=%"PRI_stime" last_start=%"PRI_stime
>> >> +           " onR=%d runnable=%d cpu_hard_affinity=%s ",
>> >>
>> > How does this come up in the console? Should we break it with a '\n'
>> > somewhere? It looks rather long...
>>
>> Some of the information is not that useful here, such as the vcpu's
>> period and budget, which can be displayed via the toolstack. I can
>> remove some of it to make this line shorter; I will drop svc->budget,
>> svc->period and prv->cpus.
>>
> Well, as you wish... A '\n' (and perhaps some more formatting with
> '\t'-s, etc.) would be fine too, IMO.

Got it! Thanks!
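
To give an idea, here is a rough sketch of what I have in mind (not the
actual next version; the argument list is assumed from the v2 patch, in
particular cur_budget, cur_deadline, last_start and the __vcpu_on_runq()
helper), with the period/budget dropped and the output broken up with
'\n' and '\t':

    /* Sketch only: shorter dump, split across lines */
    cpumask_scnprintf(cpustr, sizeof(cpustr), svc->vcpu->cpu_hard_affinity);
    printk("[%5d.%-2u] cpu %u\n"
           "\tcur_b=%"PRI_stime" cur_d=%"PRI_stime" last_start=%"PRI_stime"\n"
           "\tonR=%d runnable=%d cpu_hard_affinity=%s\n",
           svc->vcpu->domain->domain_id, svc->vcpu->vcpu_id,
           svc->vcpu->processor,
           svc->cur_budget, svc->cur_deadline, svc->last_start,
           __vcpu_on_runq(svc), vcpu_runnable(svc->vcpu), cpustr);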

>
>> >> +    schedule_lock = per_cpu(schedule_data, svc->vcpu->processor).schedule_lock;
>> >> +    ASSERT( spin_is_locked(schedule_lock) );
>> >> +
>> > As of now, the only lock around is prv->lock, isn't it? So this
>> > per_cpu(xxx) is a complex way to get to prv->lock, or am I missing
>> > something?
>>
>> Yes, it's the only lock right now. When I split the RunQ into two
>> queues (RunQ and DepletedQ), I can still use one lock, but perhaps two
>> locks would be more efficient?
>>
>> >
>> > In credit, the pre-inited locks are used "as they are", while in
>> > credit2 there is some remapping going on, but there is more than one
>> > lock anyway. That's why you find things like the above in those two
>> > schedulers. Here you should not need anything like that: just go ahead
>> > and use prv->lock (as you do everywhere else).
>> >
>> > Of course, that does not mean you don't need the lock remapping in
>> > rt_alloc_pdata(). That code looks OK to me; just adapt the bit above,
>> > which, as it stands, makes things harder to understand.
>> >
>> > Or am I overlooking something?
>>
>> I think you didn't overlook anything. I will refer to credit2 to see
>> how it is using multiple locks, since it's likely we will have two
>> locks here.
>>
> I don't think you need to. I mentioned credit2 only to make it clear why
> notation like the one above is required there, and to highlight that it
> is _not_ required in your case.
>
> Even if you start using two queues, one for runnable and one for depleted
> vcpus, access to both can well be serialized by the same lock. In fact,
> in quite a few places you'd need to move vcpus from one queue to the
> other, i.e., you'd be forced to take both locks anyway.
>
> I do think that using separate queues may improve scalability, and that
> adopting a different locking strategy could make that happen, but I just
> wouldn't do that right now, at this point of the release cycle. For now,
> the two-queue approach will "just" make the code easier to read,
> understand and hack, which is already something really important,
> especially for an experimental feature.
>
> So, IMO, just replace the line above with a simple "&prv->lock" and get
> done with it, without adding any more locks, or changing the locking
> logic.
>

I agree with you totally. Sure! I will use one lock then. :-)
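
Concretely, just as a sketch (assuming the v2 names: RT_PRIV() and a
single spinlock 'lock' in struct rt_private; __q_remove() and
__depletedq_insert() below are hypothetical helpers for illustration),
the assert in rt_dump_vcpu() would simply become:

    struct rt_private *prv = RT_PRIV(ops);

    ASSERT( spin_is_locked(&prv->lock) );

and moving a vcpu between the two queues stays serialized by that same
lock, e.g.:

    unsigned long flags;

    spin_lock_irqsave(&prv->lock, flags);
    if ( svc->cur_budget <= 0 )
    {
        __q_remove(svc);              /* off the RunQ ...       */
        __depletedq_insert(ops, svc); /* ... onto the DepletedQ */
    }
    spin_unlock_irqrestore(&prv->lock, flags);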

Best,

Meng

-----------
Meng Xu
PhD Student in Computer and Information Science
University of Pennsylvania

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

