
Re: [Xen-devel] Notes on stubdoms and latency on ARM

Hello Dario,
>> I'm not very familiar with XEN schedulers.
> Feel free to ask anything. :-)
I'm so unfamiliar that I don't even know what to ask :) But thank you.
Surely I'll have questions.

>> Looks like null scheduler
>> is good for hard RT, but isn't fine for a generic consumer system.
> The null scheduler is meant to be useful when you have a static
> scenario, no (or very little) overbooking (i.e., total nr of vCPUs ~= nr
> of pCPUs), and want to cut the scheduling overhead to _zero_.
> That may include certain classes of real-time workloads, but it is
> not limited to that use case.
Can't I achieve the same with any other scheduler by pinning one vcpu
to one pcpu?

>> How
>> do you think: is it possible to make credit2 scheduler to schedule
>> stubdoms in the same way?
> It is indeed possible. In fact, it's in the plans to do exactly
> something like that, as it could potentially be useful for a wide
> range of use cases.
> Doing it in the null scheduler is just easier, and we think it would be
> a nice way to quickly have a proof of concept done. Afterwards, we'll
> focus on other schedulers too.
>> Do you have any tools to profile or trace the XEN core? Also, I
>> don't think that pure context switch time is the biggest issue.
>> Even now, it allows 180 000 switches per second (if I'm not
>> wrong). I think scheduling latency is more important.
> What do you refer to when you say 'scheduling latency'? As in, the
> latency between which events, happening on which component?
I'm worried about interval between task switching events.
For example: vcpu1 is vcpu of some domU and vcpu2 is vcpu of stubdom
that runs device emulator for domU.
vcpu1 issues MMIO access that should be handled by vcpu2 and gets
blocked by hypervisor. Then there will be two context switches:
vcpu1->vcpu2 to emulate that MMIO access and vcpu2->vcpu1 to continue
work. AFAIK, credit2 does not guarantee that vcpu2 will be scheduled
right after vcpu1 blocks. It can schedule some vcpu3, then vcpu4, and
only then come back to vcpu2. That time interval between the event
"vcpu2 was made runnable" and the event "vcpu2 was scheduled on a
pcpu" is what I call 'scheduling latency'.
This latency can be minimized by a mechanism similar to priority
inheritance: if the scheduler knows that vcpu1 waits for vcpu2, and
there is time slice remaining for vcpu1, it should select vcpu2 as the
next scheduled vcpu. The problem is how to populate such dependencies.

WBR Volodymyr Babchuk aka lorc [+380976646013]
mailto: vlad.babchuk@xxxxxxxxx

Xen-devel mailing list
