
Re: [Xen-users] File-based domU - Slow storage write since xen 4.8




Hi, thanks for your answer.

On 07/13/17 19:20, WebDawg wrote:
> Now this is very interesting.   Is it the same hardware, switches,
> nics and all?

It is the same physical machine, I just switched the OS version.

> I might start with looking at NFS.  The new kernel might require
> some different settings to match what the old did automatically.  I
> do not know what has changed in NFS.
> 
> You are saying you tested NFS speed on a dom0?  I know dom0 is more
> like a domU than most think but what tests do you get out of that?

I realized I did not explain our setup in detail.

This is the domU configuration (using PV):

kernel      = '/data/boot/linux-4.9.0/vmlinuz-4.9.0-0.bpo.2-amd64'
ramdisk     = '/data/boot/linux-4.9.0/initrd.img-4.9.0-0.bpo.2-amd64'
vcpus       = '1'
memory      = '1000'
root        = '/dev/xvda'
disk        = [
                'file:/media/datastores/1/d-anb-nab2_root.img,xvda,w',
                'file:/media/datastores/1/d-anb-nab2_data.img,xvdb,w',
                ]
name        = 'd-anb-nab2'
vif         = [
                'ip=10.0.2.1,mac=00:16:3E:00:02:01,bridge=br2,vifname=d-anb-nab2.e0',
                'ip=10.0.1.1,mac=00:16:3E:00:01:01,bridge=br3,vifname=d-anb-nab2.e1',
                ]
on_poweroff = 'destroy'
on_reboot   = 'restart'
on_crash    = 'restart'

/media/datastores/1/ is an NFS share from a NetApp appliance, mounted
on the dom0. So the domUs do not use NFS directly; only the dom0 does.

With this configuration, the raw image file is set up as a loop
device before being exposed as /dev/xvd* to the domU.
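That mapping can be confirmed on a live dom0. A sketch (the domU name
is taken from the config above; the sysfs loop attributes are standard,
but the exact devices present obviously depend on the running system):

```shell
# On the dom0: list every loop device and the file backing it. With the
# config above, one entry per 'file:' disk of the running domU is expected.
for f in /sys/block/loop*/loop/backing_file; do
    [ -e "$f" ] || continue
    dev=${f#/sys/block/}; dev=/dev/${dev%%/*}
    printf '%s -> %s\n' "$dev" "$(cat "$f")"
done
# 'xl block-list d-anb-nab2' on the dom0 additionally shows what
# blkback was handed for each vbd.
```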


When the server is running Debian Jessie and Xen 4.4 (the domU is
always the same, running Debian Jessie with a backported 4.9.0 kernel):

dom0# xl info
host                   : ikea
release                : 3.16.0-4-amd64
version                : #1 SMP Debian 3.16.36-1+deb8u2 (2016-10-19)
machine                : x86_64
nr_cpus                : 24
max_cpu_id             : 23
nr_nodes               : 4
cores_per_socket       : 12
threads_per_core       : 1
cpu_mhz                : 2200
hw_caps                :
178bf3ff:efd3fbff:00000000:00001300:00802001:00000000:000837ff:00000000
virt_caps              : hvm
total_memory           : 49147
free_memory            : 44580
sharing_freed_memory   : 0
sharing_used_memory    : 0
outstanding_claims     : 0
free_cpus              : 0
xen_major              : 4
xen_minor              : 4
xen_extra              : .1
xen_version            : 4.4.1
xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32
hvm-3.0-x86_32p hvm-3.0-x86_64
xen_scheduler          : credit
xen_pagesize           : 4096
platform_params        : virt_start=0xffff800000000000
xen_changeset          :
xen_commandline        : dom0_mem=2G,max:2G dom0_max_vcpus=2
dom0_vcpus_pin
cc_compiler            : gcc (Debian 4.9.2-10) 4.9.2
cc_compile_by          : carnil
cc_compile_domain      : debian.org
cc_compile_date        : Wed Dec  7 06:03:38 UTC 2016
xend_config_format     : 4

dom0# dd if=/dev/zero of=/media/datastores/1/d-anb-nab2_data.img bs=4M
count=$((5*1024/4))
1280+0 records in
1280+0 records out
5368709120 bytes (5.4 GB, 5.0 GiB) copied, 48.6082 s, 110 MB/s

domU# dd if=/dev/zero of=/dev/xvdb bs=4M count=$((5*1024/4))
1280+0 records in
1280+0 records out
5368709120 bytes (5.4 GB) copied, 47.7129 s, 113 MB/s

When I reboot the server under Debian Stretch and Xen 4.8:

dom0# xl info
host                   : ikea
release                : 4.9.0-3-amd64
version                : #1 SMP Debian 4.9.30-2 (2017-06-12)
machine                : x86_64
nr_cpus                : 24
max_cpu_id             : 23
nr_nodes               : 4
cores_per_socket       : 12
threads_per_core       : 1
cpu_mhz                : 2200
hw_caps                :
178bf3ff:80802001:efd3fbff:000837ff:00000000:00000000:00000000:00000100
virt_caps              : hvm
total_memory           : 49147
free_memory            : 45574
sharing_freed_memory   : 0
sharing_used_memory    : 0
outstanding_claims     : 0
free_cpus              : 0
xen_major              : 4
xen_minor              : 8
xen_extra              : .1
xen_version            : 4.8.1
xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32
hvm-3.0-x86_32p hvm-3.0-x86_64
xen_scheduler          : credit
xen_pagesize           : 4096
platform_params        : virt_start=0xffff800000000000
xen_changeset          :
xen_commandline        : dom0_mem=2G,max:2G dom0_max_vcpus=2
dom0_vcpus_pin
cc_compiler            : gcc (Debian 6.3.0-16) 6.3.0 20170425
cc_compile_by          : ian.jackson
cc_compile_domain      : eu.citrix.com
cc_compile_date        : Tue May  2 14:06:04 UTC 2017
build_id               : 0b619fa14fca0e6ca76f2a8b52eba64d60aa37de
xend_config_format     : 4

dom0# dd if=/dev/zero of=/media/datastores/1/d-anb-nab2_data.img bs=4M
count=$((5*1024/4))
1280+0 records in
1280+0 records out
5368709120 bytes (5.4 GB, 5.0 GiB) copied, 48.6297 s, 110 MB/s

domU# dd if=/dev/zero of=/dev/xvdb bs=4M count=$((5*1024/4))
1280+0 records in
1280+0 records out
5368709120 bytes (5.4 GB) copied, 170.778 s, 31.4 MB/s
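For scale, recomputing the rates from dd's own byte/second figures
(decimal MB/s, matching dd's units) puts the regression at roughly 3.6x:

```shell
# Throughput derived from the dd outputs above (bytes / seconds / 1e6):
awk 'BEGIN { printf "Xen 4.4 domU: %.1f MB/s\n", 5368709120 / 47.7129 / 1000000 }'
awk 'BEGIN { printf "Xen 4.8 domU: %.1f MB/s\n", 5368709120 / 170.778 / 1000000 }'
awk 'BEGIN { printf "slowdown:     %.1fx\n",     170.778 / 47.7129 }'
```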


However, when the domU uses a physical device, such as an LVM logical
volume, there are no issues and the results are consistent between
Xen 4.4 and 4.8.
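One experiment that could narrow this down (a sketch, untested; the
loop device names are assumptions): attach the images to loop devices
by hand on the dom0 and hand those block devices to the domU via phy:,
so blkback sees a plain block device exactly as it does with LVM. If
the slowdown disappears, the regression sits in how blkback/libxl set
up and drive the loop device, rather than in the loop driver itself.

```
# On the dom0, before starting the domU (device names are hypothetical):
#   losetup /dev/loop0 /media/datastores/1/d-anb-nab2_root.img
#   losetup /dev/loop1 /media/datastores/1/d-anb-nab2_data.img
# Then, in the domU config, replace the file: entries with:
disk        = [
                'phy:/dev/loop0,xvda,w',
                'phy:/dev/loop1,xvdb,w',
                ]
```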


So the million-dollar question is: what happens (most probably in
blkback/blkfront, AFAICT) when the backend is a loop device that
doesn't happen when it's an LVM logical volume?



-- 
Benoit Depail
Senior Infrastructures Architect
NBS System
+33 158 565 611

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
https://lists.xen.org/xen-users

 

