
Re: [Xen-devel] Read Performance issue when Xen Hypervisor is activated



On Mon, 2017-01-02 at 07:15 +0000, Michael Schinzel wrote:
> Good Morning,
>
I'm back, although, as anticipated, I can't be terribly useful, I'm
afraid...

> You can see that, with the default Xen configuration, the key number
> in the read performance test -> 2414.92 MB/sec <- the cached read
> speed is half of what the same host achieves when booted without the
> hypervisor. We searched and searched and searched and found the
> cause: xen_acpi_processor
> 
> By default, Xen is running the CPUs at 1,200 MHz. It is like driving
> a Ferrari at 30 miles/h all the time :) So we changed the performance
> parameter with:
> 
>  xenpm set-scaling-governor all performance
> 
Well, yes, this will have an impact, but it's unlikely to be what
you're looking for. In fact, something similar would also apply to
baremetal Linux.
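
(In case it helps others reading the archive, here is a quick sketch
of checking and pinning the governor in both setups; the xenpm and
cpufreq sysfs interfaces below are the standard ones, adjust CPU
numbers to your system:)

  # Under Xen: inspect and set the governor for all pCPUs
  xenpm get-cpufreq-para 0                   # current governor for pCPU 0
  xenpm set-scaling-governor all performance

  # Baremetal Linux equivalent, via the cpufreq sysfs interface
  cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
  for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
      echo performance > "$g"
  done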

> After a little bit of searching around, I also found a parameter for
> the scheduler.
> 
> root@v7:~# cat /sys/block/sda/queue/scheduler
> noop deadline [cfq]
> 
> I changed the scheduler to deadline. After this change...
> 
Well, ISTR [noop] could be even better. But I don't think this will
make much difference either, in this case.
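
If you want to experiment anyway, schedulers can be switched at
runtime through sysfs (sda is just the device from your output; the
change does not persist across reboots):

  cat /sys/block/sda/queue/scheduler      # the active one is in brackets
  echo noop > /sys/block/sda/queue/scheduler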

> We have already tried to remove the CPU reservation, memory limit and
> so on, but this doesn't change anything. Upgrading the hypervisor
> doesn't change anything about this performance issue either.
>  
Well, these are all sequential benchmarks, so it could indeed be
expected that adding more vCPUs wouldn't change things much.

I decided to re-run some of your tests on my test hardware (which is
way lower end than yours, especially as far as storage is concerned).

These are my results:

 hdparm -Tt /dev/sda
                             Without Xen (baremetal Linux)               With Xen (from within dom0)
 Timing cached reads         14074 MB in 2.00 seconds = 7043.05 MB/sec   14694 MB in 1.99 seconds = 7382.22 MB/sec
 Timing buffered disk reads    364 MB in 3.01 seconds =  120.78 MB/sec     364 MB in 3.00 seconds =  121.22 MB/sec


 dd_obs_test.sh datei (transfer rate per block size)
 block size   Without Xen (baremetal Linux)   With Xen (from within dom0)
        512            279 MB/s                      123 MB/s
       1024            454 MB/s                      217 MB/s
       2048            275 MB/s                      359 MB/s
       4096            888 MB/s                      532 MB/s
       8192            987 MB/s                      659 MB/s
      16384            1.0 GB/s                      685 MB/s
      32768            1.1 GB/s                      773 MB/s
      65536            1.1 GB/s                      846 MB/s
     131072            1.1 GB/s                      749 MB/s
     262144            327 MB/s                      844 MB/s
     524288            1.1 GB/s                      783 MB/s
    1048576            420 MB/s                      823 MB/s
    2097152            485 MB/s                      305 MB/s
    4194304            409 MB/s                      783 MB/s
    8388608            380 MB/s                      776 MB/s
   16777216            950 MB/s                      703 MB/s
   33554432            916 MB/s                      297 MB/s
   67108864            856 MB/s                      492 MB/s
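
(For reference, here is a minimal sketch of the kind of block-size
sweep I assume dd_obs_test.sh performs; the real script may differ in
total size and cache handling. It needs to run as root, since dropping
the page cache requires it.)

  #!/bin/sh
  TOTAL=$((128 * 1024 * 1024))    # keep the amount of data written constant
  for BS in 512 1024 2048 4096 8192 16384 32768 65536 131072; do
      sync; echo 3 > /proc/sys/vm/drop_caches   # avoid cache effects
      printf '%8d  ' "$BS"
      dd if=/dev/zero of=datei bs="$BS" count=$((TOTAL / BS)) \
          conv=fsync 2>&1 | grep -o '[0-9.,]* [MG]B/s'
  done
  rm -f datei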


time dd if=/dev/zero of=datei bs=1M count=10240
  Without Xen (baremetal Linux)   With Xen (from within dom0)
       73.7224 s, 146 MB/s            97.6948 s, 110 MB/s
            real 1m13.724s                 real 1m37.700s
             user 0m0.000s                 user  0m0.068s
             sys  0m9.364s                 sys  0m15.180s


root@Zhaman:~# time dd if=datei of=/dev/null
  Without Xen (baremetal Linux)   With Xen (from within dom0)
       9.92787 s, 1.1 GB/s            95.1827 s, 113 MB/s
             real 0m9.953s                 real 1m35.194s
             user 0m2.096s                 user 0m10.632s
              sys 0m7.300s                  sys 0m51.820s

This confirms that, when the tests are run inside a Xen Dom0, things
are indeed slower.

Let me say something, though: the purpose of Xen is not to achieve the
best possible performance in Dom0. In fact, it is to achieve the best
possible aggregated performance of a number of guest domains.

That virtualization has an overhead, and that Dom0 pays quite a high
price for it, is well known. Have you tried, for instance, running
some of the tests in a DomU?
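
For instance (just a sketch; the guest name and config file below are
made up, and /dev/xvda is the usual PV disk name inside the guest):

  # On the host: start a guest and get to its console
  xl create /etc/xen/testguest.cfg
  xl console testguest

  # Inside the guest: repeat the same tests
  hdparm -Tt /dev/xvda
  time dd if=/dev/zero of=datei bs=1M count=10240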

Now, whether what both you and I are seeing is to be considered
"normal", I can't tell. Maybe Roger can (or he can tell us who to
bother for that).

In general, I don't think updating random system and firmware
components is useful at all... This is not a BIOS issue, IMO.

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

