
Re: IRQ latency measurements in hypervisor



On Thu, 2021-01-14 at 15:33 -0800, Stefano Stabellini wrote:
> On Tue, 12 Jan 2021, Volodymyr Babchuk wrote:
> > 2. RTDS scheduler. With console disabled, things like "hexdump -v
> >    /dev/zero" didn't affect the latency so badly, but I still
> >    sometimes got ~600us spikes. This is not a surprise, given the
> >    default RTDS configuration. I changed the period for DomU from
> >    the default 10ms to 100us and things got better: with Dom0
> >    burning CPU I am only rarely getting a max latency of about
> >    ~30us, with a mean latency of ~9us and a deviation of ~0.5us.
> >    On the other hand, when I tried to set the period to 30us, the
> >    max latency rose to ~60us.
> 
> This is very interesting too. Did you get any spikes with the period
> set to 100us? It would be fantastic if there were none.
> 
This *probably* makes some sense. The *probably* comes from the fact
that all the following reasoning assumes that what I recall of
real-time scheduling theory is correct, on which I would not bet.

Perhaps Stefano can ask my good old friend Marko Bertogna, from the
University of Modena, what he and his team think, as they're
collaborating on cache-coloring. He was already much better than me at
these things back in the days of the Ph.D., so for sure he's much
better than me now! :-)

Anyway, as I was saying, having a latency which is ~2x of your period
is OK, and it should be expected (given how the period is sized). In
fact, let's say that your budget is Xus and your period is 30us. This
means that you get to execute for Xus every 30us. So, basically, at
time t0 you are given a budget of Xus, and you are guaranteed to be
able to use it all within time t1=t0+30us. At that time (t1=t0+30us)
you are given another Xus of budget, and you are guaranteed to be able
to use it all within t2=t1+30us=t0+60us.
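
Just to make the guarantee concrete, here is a toy sketch in Python
(purely my own illustration, with a made-up budget value, since the
actual one wasn't in the thread; this is not how Xen's code computes
anything):

  # Toy model of the RTDS budget/period guarantee described above.
  # All values are in microseconds and purely illustrative.
  P = 30   # period
  B = 10   # budget (hypothetical; the real value is unknown)

  t0 = 0
  for k in range(3):
      replenish = t0 + k * P      # a new budget B is granted here...
      deadline = replenish + P    # ...and guaranteed usable by here
      print(f"chunk {k}: {B}us granted at t={replenish}us, "
            f"guaranteed consumed by t={deadline}us")

The only promise, in other words, is "B us of CPU somewhere inside
each 30us window", not "B us of CPU right away".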

Now, with a period as small as 30us, your budget is also going to be
pretty small (how much was that? If it was in your mail, I must have
missed it). Are you sure that the vCPU is able to wake up and run to
the point where your driver has completed the whole latency
measurement within _just_one_ budget instance (i.e., that all of this
takes less than the budget)?
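
The check itself is simple: the measurement completes within one
budget instance if and only if the work fits in the budget. With
completely made-up numbers (the real values of both are unknown to
me):

  import math

  W = 15  # us of CPU time the measurement path needs; made-up
  B = 10  # us of budget; also made-up
  n = math.ceil(W / B)  # budget instances needed
  print(n)              # n > 1 means the vCPU is preempted mid-measurement

Everything below is about what can happen when n > 1.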

In fact, let's say your budget is 10us, and that the vCPU needs 15us
for waking up and doing the measurements. At time t0 the vCPU is
scheduled, and let's say that the latency at that time is exactly 0.
The vCPU starts to run but, at time t0+10us (where 10us is the
budget), it is stopped. At time t1=t0+30us the vCPU receives a budget
replenishment, but that does not mean that it will start to run
immediately, if the system is busy.

In fact, what RTDS guarantees is that the vCPU will be able to execute
for 10us within time t2=t1+30us. So, in theory, it can start to run as
late as t2-10us without violating any guarantee.

Let's say that this is, in fact, what happens: the vCPU starts to run
only at t2-10us, and it is only then that it manages to finish
computing and recording the latency (after running for 5us more, as we
said the whole thing takes 15us).

What it will therefore record is a latency of t2-5us which, taking
t0=0, is:

  t2 - 5us = t1 + 30us - 5us = t0 + 30us + 30us - 5us
           = 0us + 60us - 5us = 55us ~= 60us
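
FWIW, the same number falls out of a little script that just
mechanizes the worst-case reasoning above (again, only my own sketch,
under the same made-up 15us-of-work / 10us-of-budget assumption, so
take it with a grain of salt):

  import math

  def worst_case_latency(W, B, P):
      # Worst-case completion time (relative to t0) of W us of work
      # for a vCPU with budget B us and period P us, assuming the vCPU
      # runs right at t0 and every later budget chunk is scheduled as
      # late as the RTDS guarantee allows.
      n = math.ceil(W / B)    # budget instances needed
      if n == 1:
          return W            # fits in the first instance, no preemption
      r = W - (n - 1) * B     # work left for the last instance
      # The last replenishment is at (n-1)*P and is guaranteed usable
      # by n*P; worst case, the vCPU resumes only at n*P - B and then
      # needs r more us:
      return n * P - B + r

  print(worst_case_latency(W=15, B=10, P=30))  # -> 55, i.e. ~60us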

So... could this be the case?

Thanks and Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)
