
Re: [Xen-devel] [RFC PATCH 0/4] Repurpose SEDF Scheduler for Real-time use

On 6/17/2014 10:44 AM, Dario Faggioli wrote:
> Hi everyone again,
> I had the chance to give a close look to the patches, so here I am. :-)
> First of all, thanks again Josh, Robbie, and Nate for this work!
> That being said...
Hello everyone, sorry we didn't respond to this conversation sooner; the last
couple of weeks have been a bit busy around here with some other projects!

Thanks for your feedback on this, Dario.  I will comment on each area
individually, but many of the things you brought up were debates we were already
having internally here at DornerWorks.  We finally decided to just put the
patch out and get a feel for what everyone else was thinking, which is exactly
what happened :-)  We want to make every effort not to work in a vacuum and to
make sure this patch series benefits the Xen community as well as our own
work.  So all comments are most definitely welcome, and we plan to make any
changes necessary to line up with whatever is decided is best for Xen.

> On ven, 2014-06-13 at 15:58 -0400, Josh Whitehead wrote:
>> NEED:
>> With the increased interest in embedded Xen, there is a need for a suitable
>> real-time scheduler.  The arinc653 scheduler currently only supports a
>> single core and has limited niche appeal, while the sedf scheduler is
>> widely considered deprecated and is currently a mess.  This patchset
>> repurposes the current sedf scheduler and adds a more capable and robust 
>> real-time scheduler suitable for embedded use to the Xen repertoire.
> That describes our current situation quite well. :-)
>> Repurposing of the sedf scheduler was accomplished by implementing the
>> Constant Bandwidth Server (CBS) algorithm (originally proposed by Dario
>> Faggioli) which is capable of properly handling mixed soft real-time and
>> hard real-time tasks (domains/vcpus) on the same system.
> So, as agreed via conf-call, and as implied by what you say yourself
> below (when mentioning the TBS), the solution is to have a working
> global EDF framework, on top of which we can potentially put multiple
> and different budgeting algorithms.
> For that exact reason, I'm not sure I'd go all the way down to SCHED_CBS
> and sched_cbs.c. In fact, even with the above 'big plan' in mind, I
> don't think renaming/substituting SEDF is that important at this stage.
> What we have now in SEDF is, basically, an outdated implementation of
> EDF with a hackish budgeting scheme on top of it.
> What this patchset aims at (although in RFC status) is implementing
> EDF with a well-known and working budgeting scheme on top of it.
> Therefore, I don't think we need to go through the renaming of the c
> files, the functions, the parameters, etc... just change the
> implementation!
Indeed, this was something we debated as well.  I think the final argument that
made us decide to submit the patch with the name change was that we had modified
SEDF significantly enough that anyone still using SEDF would be confused by the
changes.  However, if we follow some of the other comments by you, George, and
Jan (using if-defs to protect things), this should not be a problem, and leaving
the naming scheme as SEDF should not be an issue.  (I will respond to the last
e-mail sent by George as well when I'm done with this one.)

> Consider that, at least at the libxl level, we committed to maintain a
> stable and compile time backward compatible API. Going through something
> like this would require a lot of trickery, in order to make the above
> true. 
> The great value I see in this series is that it is the first step in
> turning SEDF into something useful, and you don't need to change the
> name and kill the parameters for doing that.
> In fact, about the parameters:
>  * you're adding a 'soft' param, but I think the existing 'extra' can
>    be used to mean just that, can't it? It seems to me they've got a
>    pretty compatible meaning, at least up to a certain extent (i.e.,
>    when extra=1 is used on a domain with a !0 budget and slice).
>  * 'latency', certainly there is no such thing as scaling U in the
>    original CBS algorithm, but it won't be too difficult to do that.
>    Perhaps as a subsequent step, i.e., stick a TODO about it around, for
>    now, but just don't kill it!
>  * 'weight' is the only one I'm afraid about, as that was meant to be
>    useful when SEDF was the default scheduler (so used for general
>    purpose workloads as well). That's why I'm asking other people what
>    the constraints of API stability are in this situation. However, even
>    if we end up being stuck with it, I've got a few ideas on how to use
>    it in a similar enough way to the original meaning. I.e., I'd
>    recommend the same. Ignore it and put TODOs around, but don't get rid
>    of it.
Again, we debated these name changes; the primary motivator was to update the
language of the parameters to reflect the current state of the scheduler and get
away from its prior (current) deprecated/broken state.  I think some
"if-defery" as suggested by George, and perhaps a "TODO" as you suggest, may be
the best solution for updating the naming while maintaining backward
compatibility.

> I won't comment on the details of the algorithm here. However, something
> similar to what you wrote in this cover letter, would be well suited for
> a document in docs/misc. I can contribute and help make it as clear and
> easy to understand as possible, of course.
> Also, something similar to what you put in the "USAGE INSTRUCTIONS",
> still of the cover letter, would fit very well in, still in docs/misc,
> the equivalent of sedf_scheduler_mini-HOWTO.txt (whether by replacing
> the content of such file, or killing it and creating a new one, I still
> don't know).
We would be happy to do this; it's always nice to find good documentation when
attempting to use a part of Xen!  This would also let people know the intent of
this scheduler, so that the people who don't need it don't end up using it.

>> Though useful in its current state, there are a few additional features 
>> and modifications we'd like to make in the future:
>> 1) More efficient multiprocessor support that makes better use of
>> available cores.  This would likely be accomplished through the use of a
>> global EDF queue for all available PCPUs.  Though the current version
>> does recognize and use multiple cores, it takes some creative assigning
>> and pinning by the user to use them efficiently; the goal of this change
>> would be to have this load balancing occur seamlessly and automatically.
> Yep, we do need this. However, I agree that we want an incremental
> approach. What I was hoping was that, for turning the scheduler from
> local to global, we could borrow a few ideas from credit2 (e.g., 1
> runqueue per socket), and some code from RT-Xen (and those guys are
> also close to submitting patches here on xen-devel).
We have some changes in the works that implement this, but we had reached the
point where we just wanted to get some discussion going on what the community
wanted before spending too much time building a "complete" solution that might
not fit the desires of all interested parties :-)

>> 2) VCPU level parameter assignments instead of the current domain level
>> assignments.  This would allow for more accurate and efficient division of
>> resources amongst domains and cpu pools.  Currently all vcpus within a
>> domain must use the same period and budget parameters.
> That would be good. It would be even better to go fully hierarchical,
> but yes, let's leave this to a later development.
Again, we have plans for this that should be easy to implement and shouldn't
take much time, but we wanted more input before continuing blindly.  I believe
I've heard you mention hierarchical assignments before; could you give a little
more detail on exactly what you mean by that, and maybe what you would expect
it to look like in an actual implementation?  What do you see as the advantages
of that over simple individual VCPU-level assignments?

>> 3) Alternate scheduling algorithms (e.g. Total Bandwidth Server) could now
>> be implemented in this scheduler with relative ease given the
>> simplification of the underlying EDF scheduler that was done as part of
>> this patchset.  This would further expand the capabilities of Xen in
>> embedded real-time systems and give developers more options to fit their
>> individual system requirements.
> Indeed. And I certainly am all for providing different server algorithms
> and implementations (as said above already). At some point, and perhaps
> off-list (as it may be too specific here), someone of you guys will have
> to explain what it is that you like in TBS better than in CBS. I mean,
> as far as I remember, CBS is an improved version of TBS, so why stick
> with the old one when we can have the new one... or is it that TBS is
> part of some standard/certification/whatever?
As discussed above, our biggest argument FOR the name change was to avoid
confusion; probably our biggest argument AGAINST was that we intend this to be
available in such a way that it would be easy to implement other algorithms on
top of it.  Leaving it as "sedf.c" instead would make that point more plain.

In the current implementation, any vcpu/domain that is "hard real-time" (the
default for all new vcpus) is treated as pure EDF: it gets its budget for the
period, no more, no less, and if it runs out of budget it has to wait until the
next period.  Only when the "soft" flag is set does the CBS algorithm come into
play.  Because the underlying bare-bones EDF functionality is there, it would
be a simple matter to implement other budgeting algorithms and have them
toggled via command line parameters.
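To make that distinction concrete, here is a minimal sketch of the two
budget-exhaustion rules just described.  All names and fields here are
illustrative, not the actual sedf.c code: a hard vcpu that exhausts its budget
is parked until the next period boundary, while a soft (CBS) vcpu is replenished
immediately and has its deadline postponed by one period, so it stays runnable
at a correspondingly lower EDF priority.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical, simplified per-vcpu accounting state. */
struct vcpu_acct {
    int64_t budget;     /* budget remaining in this server period */
    int64_t max_budget; /* budget granted per period */
    int64_t period;     /* replenishment period */
    int64_t deadline;   /* absolute EDF scheduling deadline */
    bool    soft;       /* false = hard (pure EDF), true = CBS */
};

/* Charge 'ran' time units of execution.  Returns true if the vcpu
 * may remain on the runqueue, false if it must wait for the next
 * period boundary (the caller would park it until replenishment). */
static bool charge_budget(struct vcpu_acct *v, int64_t ran)
{
    v->budget -= ran;
    if (v->budget > 0)
        return true;

    if (v->soft) {
        /* CBS rule: replenish now and postpone the deadline by one
         * period; the vcpu keeps running, just at lower priority. */
        v->budget   += v->max_budget;
        v->deadline += v->period;
        return true;
    }

    /* Hard rule: depleted; no more CPU until the next period. */
    return false;
}
```

A TBS-style (or any other) server would then just be another branch in the
exhaustion path, which is the "relative ease" we were referring to in the cover
letter.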

We have no special attachment to Total Bandwidth Server; that was just the
first example algorithm I happened to think of when I was writing the cover
letter :-)  The systems that we have in mind on which we would like to use Xen
will in all likelihood use CBS (or ARINC653, depending on the customer), as it
is superior in many ways.  We just wanted to make it clear that the patch
leaves it open for us, or anyone else who might need one, to add a different
algorithm in the future.

> Thanks again and Regards,
> Dario

Thanks again for the feedback, this is exactly what we were looking for.  I will
respond to your other comments in the other e-mails as well.

- Josh Whitehead
