
Re: [Xen-devel] [PATCH v11] xen: sched: convert RTDS from time to event driven model



>
>> If I can already point you to something else that can be improved
>> further in this scheduler, _completely_ independently from this, and
>> hence in a fully new patch (series), can you have a look at whether we
>> really need set_bit(), test_and_clear(), etc, instead of __set_bit(),
>> __test_and_clear(), etc.?
>>
>> I'm asking because I'm under the impression that the latter are enough,
>> and if that is the case, we should go for them, as they're more
>> efficient.
>
>
> Sure. __set_bit() and __test_and_clear() are not atomic and can be
> reordered, so they need to be protected by locks. It looks like the
> places where RTDS uses them already hold spin_locks, and re-ordering
> inside the critical region doesn't mess anything up.
>
> rt_schedule() sets: __RTDS_scheduled, __RTDS_delayed_runq_add
> burn_budget() sets: __RTDS_depleted
> rt_wake() sets: __RTDS_delayed_runq_add
>
> rt_vcpu_sleep() clears: __RTDS_delayed_runq_add
> rt_context_saved() clears: __RTDS_scheduled, __RTDS_delayed_runq_add
> repl_timer_handler() clears: __RTDS_depleted
>
> burn_budget() is called inside rt_schedule(), but they operate on
> different flags of different vcpus. So it's enough to just use
> __set_bit(), __clear_bit() and __test_and_clear().
>
> I'm assuming the new patch should be based on the last patch I sent,
> since the replenishment handler is not present in the old RTDS code?
>

Yes. You can work on top of the latest staging commit and rebase it later.
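
For reference, here is a rough, self-contained sketch of the locking
pattern that makes the non-atomic variants sufficient. This is not the
actual sched_rt.c code: the bit helpers, struct rt_vcpu, and the lock
below are simplified stand-ins (a pthread mutex plays the role of the
scheduler spinlock), but the idea is the same: every access to
svc->flags happens with the lock held, so plain read-modify-write bit
operations cannot race.

/*
 * Hedged sketch only, not Xen code: simplified stand-ins for the
 * xen/bitops.h helpers and for struct rt_vcpu, compiled as a normal
 * userspace program (gcc -pthread sketch.c).
 */
#include <stdio.h>
#include <pthread.h>

/* Hypothetical flag bit numbers, named after the RTDS ones. */
#define __RTDS_scheduled         0
#define __RTDS_delayed_runq_add  1
#define __RTDS_depleted          2

/* Simplified non-atomic helpers mirroring __set_bit()/__test_and_clear_bit(). */
static inline void __set_bit(int nr, unsigned long *addr)
{
    *addr |= 1UL << nr;                    /* plain read-modify-write */
}

static inline int __test_and_clear_bit(int nr, unsigned long *addr)
{
    int old = (*addr >> nr) & 1;
    *addr &= ~(1UL << nr);
    return old;
}

/* Cut-down stand-in for the per-vcpu scheduler state. */
struct rt_vcpu {
    unsigned long flags;
};

/* Plays the role of the scheduler spinlock protecting svc->flags. */
static pthread_mutex_t sched_lock = PTHREAD_MUTEX_INITIALIZER;

int main(void)
{
    struct rt_vcpu svc = { .flags = 0 };

    /* rt_schedule()-like path: lock held, so the non-atomic set is safe. */
    pthread_mutex_lock(&sched_lock);
    __set_bit(__RTDS_delayed_runq_add, &svc.flags);
    pthread_mutex_unlock(&sched_lock);

    /* rt_context_saved()-like path: same lock, so the clear is safe too. */
    pthread_mutex_lock(&sched_lock);
    if ( __test_and_clear_bit(__RTDS_delayed_runq_add, &svc.flags) )
        printf("delayed_runq_add was set; the vcpu would be queued now\n");
    pthread_mutex_unlock(&sched_lock);

    return 0;
}

If every writer of svc->flags is serialized like this, the locked
instructions that set_bit()/test_and_clear_bit() emit are pure
overhead, which is why the double-underscore variants look preferable
here.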

BTW, I'm testing the patch tonight and will send out my Reviewed-by
tag once I've tested it...

Thanks,

Meng


-- 
-----------
Meng Xu
PhD Student in Computer and Information Science
University of Pennsylvania
http://www.cis.upenn.edu/~mengxu/

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

