
Re: [Xen-devel] [RFC PATCH 3/4] Updated comments/variables to reflect cbs, fixed formatting and confusing comments/variables



On 6/17/2014 1:28 PM, Dario Faggioli wrote:
> [Adding RT-Xen people to the discussion, as well as Lars and Andrii from
> GlobalLogic]
> 
> On mar, 2014-06-17 at 18:11 +0200, Dario Faggioli wrote:
>> On lun, 2014-06-16 at 16:29 +0100, George Dunlap wrote:
> 
>> So, I'm asking, mostly to George, about the overall scheduling aspects
>> and implications, and to the tools maintainer, as that's where API
>> stability is to be enforced: should this be a concern? In what sense API
>> stability applies here? Can we, for example, start to ignore one or more
>> SEDF scheduling parameters?
>>
>> I'm asking explicitly about the parameters because, although I think
>> that most of the changes in this series do not actually call for much
>> renaming, at least the 'weight' and, to a certain extent, the 'extra'
>> parameters are a bit difficult to deal with (mostly because they're a
>> remnant from when SEDF was meant as a general purpose scheduler too!).
>>
>> Thoughts?
>>
> Related to this, there are basically two groups of people working on
> real-time scheduling:
> 
>  1) Josh and Robbie @ Dornerworks (+ Nate), working on fixing SEDF;
> 
>  2) RT-Xen developers, working at isolating the upstreamable bits of 
>     RT-Xen itself (although they've not sent patches here in public
>     yet).
> 
> The very nice part of 1) is that it (in the long run) fixes SEDF, and if
> we are to keep it in the tree, I really think it should be fixed!
> 
> The very nice part of 2) is that it is already a more advanced and
> mature real-time scheduling solution (for example, it already deals with
> SMPs correctly, unlike current SEDF and Josh's RFC), but it really does
> not have much to do with SEDF (either broken or fixed), from both an
> interface and an implementation (i.e., code sharing) point of view.
> 
> So, again, depending on whether or not we want to keep SEDF, and how
> hard we want to push for its interface not only to compile but to
> remain meaningful, the way forward changes. That's why I'm asking what
> we want to do with SEDF. :-/
> 
> 
> Assuming that we *do* want to keep it, my personally preferred way to
> proceed would be:
>  - we take the implementation changes, but not the renaming, from Josh's
>    effort
>  - we merge the sched_sedf.c resulting from the above with
>    sched_rtglobal.c from RT-Xen, so as to have a solution that works
>    well on SMP systems and also implements the existing SEDF interface
> 
> The merge can happen 'either way', i.e., we can try to borrow the global
> scheduling bits (i.e., the SMP support) from RT-Xen's sched_rtglobal.c
> into sched_sedf.c or, vice-versa, import Josh's SEDF/CBS code inside
> sched_rtglobal.c... In either case, I'd need Josh and Meng, and their
> respective fellows, to collaborate... Are you guys up for that? :-)
> 
Reading over this again, in combination with some of the other discussion
that has gone on (and having seen the e-mails from Meng/RT-Xen, thanks for
that!), we think this is probably a good solution.

To this end, our next version of the patch series will focus on reorganizing
the patch order as discussed, with separation between the various areas.  We
will keep our cuts to SEDF and our CBS implementation details, but we'll keep
the renaming/parameter changes limited to only what's necessary to properly
reflect the state of the scheduler.  (This is similar to what George proposed
in his last e-mail, i.e. changing the scheduler and interface in place rather
than introducing a new scheduler entirely.)
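For anyone following along who hasn't dug into CBS itself, the core behaviour
our series implements boils down to two rules: refill-and-postpone on budget
exhaustion, and a bandwidth check on wake-up.  A rough sketch (all struct and
function names here are illustrative, not the actual Xen code; times are in
abstract microsecond units):

```c
#include <stdint.h>

/* Illustrative CBS server state -- not Xen's real structures. */
struct cbs_server {
    int64_t budget;     /* remaining budget in the current period */
    int64_t max_budget; /* Q: budget granted each period */
    int64_t period;     /* T: server period */
    int64_t deadline;   /* current absolute scheduling deadline */
};

/* Rule 1: when the budget runs out, postpone the deadline by one
 * period and refill the budget.  This is what caps the server's
 * CPU consumption at the reserved bandwidth Q/T, no matter how
 * much work its vCPU demands. */
static void cbs_budget_exhausted(struct cbs_server *s)
{
    s->deadline += s->period;
    s->budget = s->max_budget;
}

/* Rule 2: on wake-up, if spending the leftover budget before the
 * current deadline would exceed the reserved bandwidth Q/T, start
 * a fresh period (new deadline, full budget); otherwise keep the
 * current budget and deadline.  Written as a cross-multiplication
 * to avoid floating point: budget/(deadline-now) > Q/T  <=>
 * budget*T > (deadline-now)*Q. */
static void cbs_wake(struct cbs_server *s, int64_t now)
{
    if (s->budget * s->period >= (s->deadline - now) * s->max_budget) {
        s->deadline = now + s->period;
        s->budget = s->max_budget;
    }
}
```

That's the whole trick: EDF ordering on the resulting deadlines then gives
temporal isolation between servers, which is what makes CBS attractive as a
replacement for the original SEDF internals.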

At a minimum this would put the series in a good state so that, if it were
upstreamed, RT-Xen and/or DornerWorks could easily patch against the scheduler
to add the additional features discussed.  These patches could then come
directly from RT-Xen, DornerWorks, or some other hybrid of RT-Xen and
DornerWorks code, all while occurring incrementally and leaving a working
version of the scheduler in the tree at all times.

Everyone, let us know what you think; we'll get to work on our V2 and get it
out soon to give everyone a better idea of where that puts us.  Thanks,

- Josh Whitehead


> Regards,
> Dario
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

