Re: [Xen-devel] [PATCH] Modified RTDS scheduler to use an event-driven model instead of polling.
Thanks for the feedback.
I could run some experiments to measure this. An easier quick test may be to count invocations and run the same workload on each, making sure each runs for a long time to remove biases.

On Mon, 2015-06-08 at 07:46 -0400, Dagaen Golomb wrote:

a) is essentially what we have going on here. The list is exactly that: a list of possible scheduling-change times (including replenishments, as they are coalesced with the period), one per vcpu, and the timer is triggered using the earliest one.

> And note that, when I say "timer", I mean an actual Xen timer, i.e.,

I will look at this. However, the current solution is likely functionally equivalent and, with only one timer and a single list used to arm it, I'm not sure using a Xen timer is necessary. Do they incur any extra overhead or coarsen granularity?
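To make the "one event time per vcpu, earliest one arms the timer" idea concrete, here is a minimal sketch (my own illustration, not the patch's actual code: `toy_vcpu` and `earliest_event` are hypothetical names, and the real patch keeps the times in a sorted timerq rather than scanning an array):

```c
#include <stdint.h>

/* Sketch only: each vcpu records the next instant at which anything
 * could change the scheduling decision (a replenishment, which
 * coincides with the period boundary, or budget exhaustion). */
typedef uint64_t s_time_t;          /* stand-in for Xen's s_time_t */

struct toy_vcpu {
    s_time_t next_sched_needed;     /* next event time for this vcpu */
};

/* Return the earliest event time across all vcpus; the single
 * scheduler timer would be armed with this value. */
static s_time_t earliest_event(const struct toy_vcpu *v, int n)
{
    s_time_t min = v[0].next_sched_needed;
    for (int i = 1; i < n; i++)
        if (v[i].next_sched_needed < min)
            min = v[i].next_sched_needed;
    return min;
}
```

A sorted list (as in the patch) makes this lookup O(1) at the head instead of a scan; the sketch only shows the invariant being maintained.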
I agree b) may be better in the long run, but with the current architecture a) is simpler. b) can be future work as the scheduler evolves.

> Performance and overhead wise, again, hard to tell in advance. b) would

The timerq will also hold the end of the period if that is the next relevant event, and if this value is at the head of the timerq it will be used to arm the timer, letting the task run until budget exhaustion.

Does all this make sense? Am I making myself clear enough? I think once you understand that the timerq holds not only replenishments, but any event that could cause a branch in the scheduler's behavior, it becomes more palatable to have them intertwined. Really, the timerq and scheduling aren't as intertwined as they appear; they are practically isolated, functionally. It's just that the timerq is most naturally serviced during scheduling events, when the data that affects scheduling decisions is changing. It's straightforward and efficient. Let me know your thoughts on this.

> Also, the bits and pieces that you need to add in order to deal with

While it does add some complexity, I don't feel a single queue and the code to manage it is that much extra complexity.

> So, all this being said, let me know what you think about it (and that

Noted. Note that it's not as large as it seems: a large chunk of the deletions, and an equal number of the insertions, come from moving a function so it is visible to other functions.

> diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c

As mentioned, the timerq handles all events that could change the scheduling decision, not just replenishments. It tracks the earliest time anything could cause a vcpu to be scheduled differently, and could be extended if more insights are found. Budget enforcement is being done in rt_schedule, but it is driven by this very list: a budget expiration is one possible event that would require a vcpu reschedule.
Sorry, I thought I had gotten them all changed to spaces. My editor wasn't configured correctly at first, so some are lurking around. I will exterminate the remaining ones.

> /*

Can do.

> +    list_for_each(iter, timerq)

Again, sorry. Will exterminate.
Honestly, events do not have to be removed here, but it's done to prevent walking a list of stale values to get to the newest one on the next call. This is done mostly for performance; leaving them in place would be functionally equivalent. With the timer alternative you mention, where would the timer events and their data be held? I think that could be extra complexity, unless I fail to understand the idea completely. The current approach is actually somewhat simple if you think about it, requiring just one piece of information per vcpu (the next_sched_needed field), which is updated as information becomes available. It also uses a guaranteed O(1) increase in memory per vcpu with how the queue and next_sched_needed are set up, and most of the computation is rolled into existing routines that need to run anyway.

MAX_SCHEDULE may not be required, but I have it there as a safety net. For example, it will cause the scheduler to be invoked after MAX_SCHEDULE even when no events are occurring (such as when there are no vcpus). It's there to ensure progress in case of bugs or unexpected behavior.

> +/*

I see what you mean here, but this doesn't seem bad to me. Right now it's only these two events and a simple if-else, but what is nice is that this method can be updated to include any number of tricks or insights that help determine when something may need to be rescheduled. Right now it may look complex for what is just two cases, but many more cases, or an entirely different accounting method, could be placed here and it would be fine. It makes for a nice "ground zero" for modifying the timer-update behavior.
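A rough sketch of what that single update point might look like (illustrative only: `next_sched_time` and the constant values here are my own, not the patch's code; the real MAX_SCHEDULE and time types live in the Xen sources):

```c
#include <stdint.h>

typedef uint64_t s_time_t;          /* stand-in for Xen's s_time_t */

/* Arbitrary cap for this sketch: the scheduler is guaranteed to be
 * invoked at least this often, even if no event is pending. */
#define MAX_SCHEDULE 10000

/* The next time this vcpu could need rescheduling is the earlier of
 * its replenishment (period boundary) and its budget running out,
 * capped at now + MAX_SCHEDULE as a safety net. */
static s_time_t next_sched_time(s_time_t now, s_time_t next_replenish,
                                s_time_t budget_left)
{
    s_time_t budget_out = now + budget_left;   /* budget exhaustion */
    s_time_t t = next_replenish < budget_out ? next_replenish : budget_out;
    s_time_t cap = now + MAX_SCHEDULE;
    return t < cap ? t : cap;
}
```

The if-else in the patch plays the same role: pick whichever pending event comes first, and fall back to the cap so progress is guaranteed.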
Ah, thanks for mentioning. The move was required, but I didn't know a separate patch was customary. I can make it so.

> That being said, in this specific case, you're moving up runq_tickle(),

It is set up to only tickle if needed. I'm not sure if this is an issue.

> Again, with an approach that really take advantage of timers, this won't

I think there could be some discussion of the approaches; hopefully others comment as well. As mentioned, this is actually a logically quite simple change with good efficiency, and the "ground zero" update function makes changes that use lots of information easy, although for now it was kept simple to get the change out. Future improvements to the update function were somewhat expected as future work.

Thanks for the comments.
~Dagaen

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel