
Re: [Xen-devel] RT Xen on ARM - R-Car series



Hello Julien,

On 14.02.19 19:29, Julien Grall wrote:
I am not sure why you are speaking about the current implementation
when my point was about the new implementation.

I guess your point sticks even if we decide to use guest physical
address.
For sure. The type of guest address the hypervisor uses to reach the runstate
area and whether that area is kept mapped are formally orthogonal questions, but
IMHO they are tightly coupled and might be solved together.

Although, I am still unconvinced of the benefits of keeping it
mapped.
My point was to reduce the context switch time, but those conflicting numbers left
me confused.
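To show where I expected the gain to come from, here is a rough sketch of the two
shapes the hypervisor-side update can take on a context switch. This is not Roger's
actual patch: the `runstate_mapped` field is made up for illustration, and only the
copy-based variant resembles what Xen currently does.

/* Variant A: translate and copy through the guest address on every switch. */
static void update_runstate_area_copy(struct vcpu *v)
{
    if ( guest_handle_is_null(runstate_guest(v)) )
        return;

    /* Has to walk the guest's translation on every context switch. */
    __copy_to_guest(runstate_guest(v), &v->runstate, 1);
}

/* Variant B: the area was mapped once when the guest registered it. */
static void update_runstate_area_mapped(struct vcpu *v)
{
    if ( v->runstate_mapped == NULL )      /* hypothetical field */
        return;

    /* Plain store through a mapping which already exists. */
    memcpy(v->runstate_mapped, &v->runstate, sizeof(v->runstate));
}

The second variant is what should make the per-switch cost cheaper, at the price of
keeping the area mapped in the hypervisor.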

I've measured the raw `update_runstate_area()` execution time. With the runstate
area mapped, its execution time is below my timer tick (120ns); with the runstate
area not mapped, I've seen its execution time at 4 to 8 ticks (480-960ns).

Please provide the code you use to measure it. How often do you call it?
The code to see that is as simple as the following:

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 0a2e997..d673d00 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -314,8 +314,16 @@ static void schedule_tail(struct vcpu *prev)
     context_saved(prev);
     if ( prev != current )
+    {
+        s_time_t t = 0;
+        if (current->domain->domain_id == 1)
+            t = NOW();
         update_runstate_area(current);
+        if (current->domain->domain_id == 1)
+            printk("cp = %"PRI_stime"\n", NOW()-t);
+    }
+
     /* Ensure that the vcpu has an up-to-date time base. */
     update_vcpu_system_time(current);
 }

But using TBM, I encountered that making the runstate mapped with Roger's patch
increases the IRQ latency from ~7000ns to ~7900ns. That is the opposite of my
expectations and of the raw decrease in `update_runstate_area()` execution time.
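For reference, the idea behind such a latency measurement is simple: arm the virtual
timer, then in the IRQ handler compare the counter with the programmed deadline. A
rough sketch of that idea (not the actual TBM code; the register accessors and the
interrupt plumbing are simplified):

#include <stdint.h>

static inline uint64_t read_cntvct(void)
{
    uint64_t v;

    __asm__ volatile("mrs %0, cntvct_el0" : "=r" (v));
    return v;
}

static volatile uint64_t deadline;        /* counter value the timer is armed for */
static volatile uint64_t latency_ticks;   /* filled in by the IRQ handler */

/* Arm the EL1 virtual timer to fire 'delta' counter ticks from now. */
static void arm_vtimer(uint64_t delta)
{
    deadline = read_cntvct() + delta;
    __asm__ volatile("msr cntv_cval_el0, %0" : : "r" (deadline));
    __asm__ volatile("msr cntv_ctl_el0, %0" : : "r" (1UL));  /* enable */
}

/* Called from the vector code for the virtual timer interrupt. */
void vtimer_irq_handler(void)
{
    latency_ticks = read_cntvct() - deadline;
    __asm__ volatile("msr cntv_ctl_el0, %0" : : "r" (0UL));  /* disable */
}

Converting ticks to nanoseconds is then just latency_ticks * 10^9 / CNTFRQ_EL0.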

Raw benchmarks should be taken with a grain of salt, all the more if you
only benchmark a single function, as the context switch may introduce
latency/cache eviction.

Although, I would have expected the numbers to be the same. What is
your configuration here? Do you have other guests running? How many
context switches do you have?

The configuration is the same as here [1].


Also, what are the modifications you made in TBM to use runstate?

I've just registered the runstate area with the change in [2].
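The registration itself is just the standard runstate hypercall; a minimal sketch of
what [2] roughly boils down to (assuming the Xen public headers and a
HYPERVISOR_vcpu_op() wrapper are available in the guest; the include paths differ per
environment):

#include <stdint.h>
#include <xen/xen.h>
#include <xen/vcpu.h>       /* e.g. xen/interface/vcpu.h in a Linux guest */

static struct vcpu_runstate_info runstate_info;

static int register_runstate_area(unsigned int vcpu)
{
    struct vcpu_register_runstate_memory_area area;

    /* Xen will refresh this guest buffer on every context switch. */
    set_xen_guest_handle(area.addr.h, &runstate_info);

    return HYPERVISOR_vcpu_op(VCPUOP_register_runstate_memory_area,
                              vcpu, &area);
}

Note that the address passed here is a guest virtual address, which is exactly what
makes the in-hypervisor handling interesting.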

[1] https://lists.xenproject.org/archives/html/xen-devel/2018-12/msg00881.html
[2] https://github.com/aanisov/tbm/commit/806e8a7e6e6c72aef10f1c8a579424252aca2635


--
Sincerely,
Andrii Anisov.


 

