
Re: [Xen-devel] About vcpu wakeup and runq tickling in credit



On Tue, 2012-10-23 at 16:16 +0100, George Dunlap wrote:
> > If yes, here's my question. Is it right to always tickle v_W's affine
> > CPUs and only them?
> >
> > I'm asking because a possible scenario, at least according to me, is
> > that P schedules very quickly after this and, as prio(v_W)>prio(v_C), it
> > selects v_W and leaves v_C in its runq. At that point, one of the
> > tickled CPUs (say P') enters schedule, sees that P is not idle, and tries
> > to steal a vcpu from its runq. Now we know that P' has affinity with
> > v_W, but v_W is not there, while v_C is, and if P' is not in v_C's
> > affinity, we've forced P' to reschedule for nothing.
> > Also, there might now be another CPU (or even a number of them) where v_C
> > could run that stays idle, as it has not been tickled.
> 
> Yes -- the two clauses look a bit like they were conceived 
> independently, and maybe no one thought about how they might interact.
> 
Yep, it looked the same to me.

> > As for possible solutions, I think that, for instance, tickling
> > all the CPUs in both v_W's and v_C's affinity masks could solve this,
> > but that would also potentially increase the overhead (by asking _a_lot_
> > of CPUs to reschedule), and again, it's hard to say if/when it's
> > worth it...
> 
> Well in my code, opt_tickle_idle_one is on by default, which means only 
> one other cpu will be woken up.  If there were an easy way to make it 
> wake up a CPU in v_C's affinity as well (supposing that there was no 
> overlap), that would probably be a win.
> 
Yes, the default is to tickle only one idler. However, since we offer
that as a command line option, I think we should also consider what
happens if someone disables it.

I double checked this on Linux and, mutatis mutandis, they sort of go
the way I was suggesting, i.e., "pinging" the CPUs with affinity to the
task that will likely stay in the runq rather than being picked up
locally. However, there are of course big differences between the two
schedulers, and different assumptions being made, so I'm not really
sure that it is the best thing to do for us.

So, yes, it probably makes sense to think about something clever to
involve CPUs from both masks without causing too much overhead.
I'll put that in my TODO list. :-)
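
Just to make that a bit more concrete, here's a rough (and completely
untested) sketch of what I have in mind. I'm pretending it lives in
sched_credit.c, so the cpumask helpers and SCHEDULE_SOFTIRQ are in
scope, and I'm using v_W/v_C and the cpu_affinity field purely for
illustration:

static void tickle_both_masks(const struct vcpu *v_W,
                              const struct vcpu *v_C,
                              const cpumask_t *idlers)
{
    cpumask_t mask, tmp;

    cpumask_clear(&mask);

    /* What we do today: one idler within v_W's affinity. */
    cpumask_and(&tmp, idlers, v_W->cpu_affinity);
    if ( !cpumask_empty(&tmp) )
        cpumask_set_cpu(cpumask_first(&tmp), &mask);

    /*
     * v_C is likely to be left sitting in P's runq, so also poke an
     * idler that could actually steal it, unless the CPU picked above
     * can do that already.
     */
    cpumask_and(&tmp, idlers, v_C->cpu_affinity);
    if ( !cpumask_empty(&tmp) && !cpumask_intersects(&mask, &tmp) )
        cpumask_set_cpu(cpumask_first(&tmp), &mask);

    if ( !cpumask_empty(&mask) )
        cpumask_raise_softirq(&mask, SCHEDULE_SOFTIRQ);
}

That would never raise SCHEDULE_SOFTIRQ on more than two CPUs, so the
overhead should stay bounded even with opt_tickle_idle_one disabled.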

> Of course, that's only necessary if:
> * v_C is lower priority than v_W
> * There are no idlers that intersect both v_C's and v_W's affinity masks.
> 
Sure, I said that in the first place, and I don't think checking for
that is too hard... just a couple more bitmap ops. But again, I'll
give it some thought.
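
FWIW, the kind of check I'm thinking of is something like the below
(again just a sketch: I'm assuming the csched_vcpu layout from
sched_credit.c, where higher values of the pri field mean higher
priority, and the helper name is made up):

static int extra_tickle_needed(const struct csched_vcpu *vW,
                               const struct csched_vcpu *vC,
                               const cpumask_t *idlers)
{
    cpumask_t common;

    /* If v_W does not preempt v_C, v_C keeps running: nothing to do. */
    if ( vW->pri <= vC->pri )
        return 0;

    /* An idler usable by both vcpus makes the extra tickle pointless. */
    cpumask_and(&common, vW->vcpu->cpu_affinity, vC->vcpu->cpu_affinity);
    cpumask_and(&common, &common, idlers);

    return cpumask_empty(&common);
}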

> It's probably a good idea though to try to set up a scenario where this 
> might be an issue and see how often it actually happens.
> 
Definitely. Before trying to "fix" it, I'm interested in finding out
what I'd actually be fixing. Will do that.

Thanks for taking time to check and answer this! :-)
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

