
[Xen-devel] [RFC PATCH v2 0/7] Repurpose SEDF Scheduler for Real-time Use

This patch series has been updated in response to the discussion of the previous
patch series.  The first patch was split into three parts to make it easier to
review: the first patch now only removes unneeded code, the second updates
formatting and variable names, and the third only adds the new functionality.

Additionally, the other patches modifying the toolstack have been divided so
that they more logically coincide (as best as we were able to determine) with
the various pieces of the toolstack.  As a side note: would it be possible to
specify, perhaps in the MAINTAINERS file, where (i.e. at which files) the split
should occur for each piece of the toolstack?  We used what was mentioned in the
discussion to make the division, but having it documented somewhere would be
very helpful in the future.

Also in regard to the previous discussion, attempts have been made to protect
the interface versioning through the use of ifdefs etc.  More likely needs to
be done with the versioning; please let us know.  We have tried to indicate the
files we are most unsure of in the change-log for each patch.

Finally, the number of name changes has been significantly reduced; most
prominently, we kept the "sedf" name rather than switching to a new one, as was
also decided in the previous discussion.

Hopefully these changes have made the series easier to review.  We look forward
to everyone's comments and suggestions.

Below is the original patch cover letter (with minor updates to reflect the lack
of name changes) for reference:


With the increased interest in embedded Xen, there is a need for a suitable
real-time scheduler.  The arinc653 scheduler currently only supports a
single core and has limited niche appeal, while the sedf scheduler is
widely considered deprecated and is currently a mess.  This patchset
repurposes the current sedf scheduler and adds a more capable and robust
real-time scheduler, suitable for embedded use, to the Xen repertoire.

Repurposing of the sedf scheduler was accomplished by implementing the
Constant Bandwidth Server (CBS) algorithm (as originally proposed for use in
Xen by Dario Faggioli), which is capable of properly handling mixed soft
real-time and hard real-time tasks (domains/vcpus) on the same system.

If all domains in the system are set as hard real-time domains (the default
behavior), the scheduler acts as a very simple EDF scheduler, akin to the
original version of the SEDF scheduler.  If the "soft" flag is set for a
domain, the CBS algorithm is used to set that domain's deadlines.  The
primary advantage of this algorithm is that it allows a domain/vcpu to
consume additional CPU resources when they are available (it is work
conserving) without affecting the deadlines of other domains (temporal
isolation), while still enforcing that the domain receives a guaranteed
minimum level of service.
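For readers unfamiliar with CBS, the budget and deadline rules it imposes on a
soft domain can be sketched roughly as follows.  This is an illustrative Python
model of the algorithm as described above, not the actual sched_sedf.c code;
all names are invented for clarity:

```python
class CbsVcpu:
    """Toy model of one CBS-served vcpu (times in ms)."""

    def __init__(self, period, budget):
        self.period = period        # reservation period
        self.budget = budget        # budget granted per period
        self.cur_budget = budget    # budget remaining
        self.deadline = 0           # current absolute deadline

    def wake(self, now):
        # CBS wakeup rule: if the remaining budget cannot be served by the
        # current deadline at the reserved bandwidth (budget/period), start
        # a fresh period from 'now' with a full budget.
        if self.cur_budget >= (self.deadline - now) * (self.budget / self.period):
            self.deadline = now + self.period
            self.cur_budget = self.budget

    def consume(self, ran):
        # Account CPU time actually used.  When the budget is exhausted,
        # the deadline is postponed by one period and the budget is
        # replenished: the vcpu may keep running (work conserving), but at
        # a later deadline, so other domains' guarantees are untouched
        # (temporal isolation).
        self.cur_budget -= ran
        while self.cur_budget <= 0:
            self.cur_budget += self.budget
            self.deadline += self.period
```

For instance, a vcpu with period 40 and budget 10 that wakes at time 0 gets
deadline 40; after running for its full 10 ms of budget, its deadline is pushed
out to 80 rather than the vcpu being forcibly idled.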

This functionality allows hard and soft tasks to interact in a safe and
predictable fashion (neither type adversely impacts the performance of the
other), while still making efficient use of all available system resources;
i.e. unlike a pure EDF system, there is little or no wasted time where the
CPU sits idle.

A full discussion of the Constant Bandwidth Server algorithm and its
behavior is located here: 

After applying the patches, the scheduler can be used just as before by
specifying the "sched=sedf" boot parameter.  Though capable of booting Dom0
with "sched=sedf", the scheduler is best suited for testing and development
in a separate CPU pool, because it is targeted at embedded real-time systems.
Performance as a general-purpose scheduler would be good, though noticeably
worse than the current credit schedulers.  To achieve its full potential, the
CBS scheduler would need to be applied to a specific real-time system, using a
real-time OS, where the system designer has some foreknowledge of that
system's workload.

CBS uses a resource reservation scheme in which the resources provided to
each domain/vcpu are specified in milliseconds using the "period=" and
"budget=" parameters in the domain's config file, or at the command line
using the "xl sched-sedf" command.  The "bandwidth", or percentage of
available processing power that is guaranteed to a domain, is simply the
ratio budget / period.  The total of these ratios for all domains operated
by the CBS scheduler should never exceed 1.  Currently there are no checks
preventing a user from specifying ratios whose total is greater than 1, but
failing to follow this rule will likely leave the domains in question
unstable and prone to crashing, as it amounts to promising more than 100%
of the total available processing power.
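Such an admission check, were one to be added, would only need to sum the
ratios.  A minimal sketch in Python (illustrative only; as noted above, this
series implements no such check, and the function name is invented):

```python
def admissible(reservations):
    """reservations: iterable of (period_ms, budget_ms) pairs, one per
    domain/vcpu operated by the CBS scheduler."""
    # Total bandwidth is the sum of budget/period ratios; a total above
    # 1.0 over-commits the CPU, as described above.
    total = sum(budget / period for period, budget in reservations)
    return total <= 1.0
```

Four domains at period 40 / budget 10 total exactly 1.0 and pass; a fifth such
domain would push the total to 1.25 and be rejected.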

As an example, let's look at a setup of 4 CBS-operated domains: 1 hard RT
and 3 soft RT, each with a period of 40 and a budget of 10.  Each domain
has a bandwidth of 0.25 (10/40), so with 4 domains the total bandwidth is
1.0, meeting that requirement.  In this scenario, the hard RT domain would
be guaranteed 25% of the CPU time, but would also never be able to use more
than that.  The 3 soft RT domains would be able to use up to 100% of the
CPU depending on system load across all the domains, but would each be
guaranteed to receive 25% of the processor even under fully loaded
conditions.  While the 3 soft RT domains could make use of all of the CPU,
the CBS algorithm prevents them from causing the hard RT domain to miss any
deadlines.
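In a domain config file, the reservation for each of the example domains
would look something like the following sketch.  Only the "period=" and
"budget=" keys are named above; the exact spelling of the soft-reservation
flag in the config file is an assumption here:

```
# 40 ms period, 10 ms budget -> 0.25 guaranteed bandwidth
period = 40
budget = 10
soft = 1     # assumed key name for the "soft" flag; hard RT is the default
```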

Instructions for using the "xl sched-sedf" command and modifying a domain's
period and budget parameters are located in the xl man page.

Though useful in its current state, there are a few additional features 
and modifications we'd like to make in the future:

1) More efficient multiprocessor support that makes better use of
available cores.  This would likely be accomplished through the use of a
global EDF queue for all available PCPUs.  Though the current version does
recognize and use multiple cores, it takes some creative assigning and
pinning by the user to use them efficiently; the goal of this change would
be to have that load balancing occur seamlessly and automatically.

2) VCPU level parameter assignments instead of the current domain level
assignments.  This would allow for more accurate and efficient division of
resources amongst domains and cpu pools.  Currently all vcpus within a
domain must use the same period and budget parameters.

3) Alternate scheduling algorithms (e.g. Deferrable Server) could now
be implemented in this scheduler with relative ease given the
simplification of the underlying EDF scheduler that was done as part of
this patchset.  This would further expand the capabilities of Xen in
embedded real-time systems and give developers more options to fit their
individual system requirements.

Josh Whitehead (3):
  Removed all code from sedf not needed for basic EDF functionality
  Fixed formatting and misleading comments/variables. Added comments
    and renamed variables to accurately reflect modern terminology
  Added constant bandwidth server functionality to sedf scheduler

Robbie VanVossen (4):
  Add cbs parameter support and removed sedf parameters from libxc
  Add cbs parameter support and removed sedf parameters with a    
    LIBXL_API_VERSION gate from libxl.
  Changed slice to budget in libxc for the sedf scheduler
  Changed slice to budget in libxl for the sedf scheduler

 docs/man/xl.cfg.pod.5             |   13 +-
 tools/libxc/xc_sedf.c             |   24 +-
 tools/libxc/xenctrl.h             |   12 +-
 tools/libxl/libxl.c               |   36 +-
 tools/libxl/libxl.h               |    2 +
 tools/libxl/libxl_create.c        |   61 --
 tools/libxl/libxl_types.idl       |    2 +
 tools/libxl/xl_cmdimpl.c          |   72 +++
 tools/libxl/xl_cmdtable.c         |    7 +
 tools/python/xen/lowlevel/xc/xc.c |   42 +-
 xen/common/sched_sedf.c           | 1280 ++++++++++---------------------------
 xen/include/public/domctl.h       |    6 +-
 12 files changed, 487 insertions(+), 1070 deletions(-)

