
Re: [Xen-API] [XCP 1.6 beta] poor performance while importing vm template


  • From: SpamMePlease PleasePlease <spankthespam@xxxxxxxxx>
  • Date: Wed, 3 Oct 2012 08:32:14 +0100
  • Cc: xen-api@xxxxxxxxxxxxx
  • Delivery-date: Wed, 03 Oct 2012 07:32:36 +0000
  • List-id: User and development list for XCP and XAPI <xen-api.lists.xen.org>

Dear Sebastien,

On Wed, Oct 3, 2012 at 6:22 AM, Sébastien RICCIO <sr@xxxxxxxxxxxxxxx> wrote:
> Hi,
>
> How are you importing the XVA? From the xe command line? From XenCenter?
> Is the source XVA file on the same disk(s) as the SR you are importing to?

The XVA is being imported via the CLI, with xe vm-import
filename=/tmp/the.xva. It is located on the same disk as the local
LVM-based SR.
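
For completeness, a minimal way to reproduce this (the sr-uuid
parameter is optional and shown here only as a placeholder; without it
the import goes to the pool's default SR):

  xe vm-import filename=/tmp/the.xva sr-uuid=<uuid-of-local-LVM-SR>

and in a second console, while the import runs:

  iostat -x 2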

> Have you tried basic disk read or write throughput tests on these disks like
> running a:
> dd if=/dev/zero of=test bs=1M count=2000 oflag=direct


While running that test, I/O wait also skyrockets (up to ~60%), along
with the load average (up to ~2), and the average write speed is only
12.8 MB/s. However, hdparm reports the following:

/dev/sda:
 Timing cached reads:   23680 MB in  1.99 seconds = 11887.43 MB/sec
 Timing buffered disk reads: 492 MB in  3.01 seconds = 163.45 MB/sec
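
(For the record, those figures come from the usual hdparm -tT /dev/sda
run. Note that hdparm -tT only exercises reads, while the dd test
above does O_DIRECT writes, so the two numbers are not directly
comparable; the write path is where the collapse shows up.)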

Also, smartctl doesn't report anything wrong with the disks, and the
whole server was thoroughly tested and investigated by the server
provider to rule out hardware issues, so I am basically out of ideas,
especially since I have a second server, running XCP 1.1 on somewhat
older hardware, that doesn't show these issues and is configured
identically (well, it even uses md RAID for / and a local LVM SR).
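
(By the SMART check I mean nothing exotic, just the usual overall
health and attribute queries, something like:

  smartctl -H /dev/sda
  smartctl -a /dev/sda

both of which came back clean.)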

Kind regards,
S.

> Just to be sure it's not a disk or driver problem, and to dismiss
> that possibility.
>
> Cheers,
> Sébastien
>
>
>
>
>
> On 02.10.2012 17:30, SpamMePlease PleasePlease wrote:
>>
>> Hi all,
>>
>> I would like to report an issue that exists on XCP 1.6 beta (and on
>> 1.1 stable too) and results in very slow VM import/export, causing
>> high I/O wait and load. The details are below; the machine is freshly
>> installed, without any modifications (such as md RAID, and so on):
>>
>> iostat:
>>
>> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>>             0,00    0,00    0,00   62,98    0,00   37,02
>>
>> Device:         rrqm/s   wrqm/s   r/s   w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
>> sda               0,00     0,40  0,00  6,19     0,00  2184,83   353,10     3,76  553,87 161,61 100,00
>> sda1              0,00     0,00  0,00  0,60     0,00    35,13    58,67     0,75 1540,00 893,33  53,49
>> sda2              0,00     0,00  0,00  0,00     0,00     0,00     0,00     0,00    0,00   0,00   0,00
>> sda3              0,00     0,40  0,00  5,59     0,00  2149,70   384,64     3,01  448,21 178,93 100,00
>> sdb               0,00     0,00  0,00  0,00     0,00     0,00     0,00     0,00    0,00   0,00   0,00
>> dm-0              0,00     0,00  0,00  0,00     0,00     0,00     0,00     0,00    0,00   0,00   0,00
>> dm-1              0,00     0,00  0,00  6,39     0,00  2203,79   345,03     3,25  444,38 156,56 100,00
>> tda               0,00   215,57  0,00 25,15     0,00  2181,24    86,73   172,15 7475,32  39,76 100,00
>>
>>
>>
>> top:
>>
>> top - 17:26:48 up 13 min,  2 users,  load average: 2.79, 1.63, 0.77
>> Tasks: 135 total,   1 running, 134 sleeping,   0 stopped,   0 zombie
>> Cpu(s):  0.0%us,  0.0%sy,  0.0%ni, 25.4%id, 74.6%wa,  0.0%hi,  0.0%si,  0.0%st
>> Mem:    762244k total,   747080k used,    15164k free,   238616k buffers
>> Swap:   524280k total,        0k used,   524280k free,   289188k cached
>>
>>    PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
>>   5901 root      20   0 26424 8544  268 S  0.3  1.1   0:00.45 elasticsyslog
>> 10758 root      20   0  2428 1104  820 R  0.3  0.1   0:00.02 top
>>      1 root      20   0  2164  656  564 S  0.0  0.1   0:00.23 init
>>      2 root      20   0     0    0    0 S  0.0  0.0   0:00.00 kthreadd
>>      3 root      RT   0     0    0    0 S  0.0  0.0   0:00.00 migration/0
>>      4 root      20   0     0    0    0 S  0.0  0.0   0:00.00 ksoftirqd/0
>>
>>
>>
>> lspci:
>>
>> 00:00.0 Host bridge: Intel Corporation 3rd Gen Core processor DRAM Controller (rev 09)
>> 00:01.0 PCI bridge: Intel Corporation 3rd Gen Core processor PCI Express Root Port (rev 09)
>> 00:02.0 VGA compatible controller: Intel Corporation 3rd Gen Core processor Graphics Controller (rev 09)
>> 00:14.0 USB controller: Intel Corporation 7 Series Chipset Family USB xHCI Host Controller (rev 04)
>> 00:16.0 Communication controller: Intel Corporation 7 Series Chipset Family MEI Controller #1 (rev 04)
>> 00:1a.0 USB controller: Intel Corporation 7 Series Chipset Family USB Enhanced Host Controller #2 (rev 04)
>> 00:1c.0 PCI bridge: Intel Corporation 7 Series Chipset Family PCI Express Root Port 1 (rev c4)
>> 00:1c.4 PCI bridge: Intel Corporation 7 Series Chipset Family PCI Express Root Port 5 (rev c4)
>> 00:1c.6 PCI bridge: Intel Corporation 7 Series Chipset Family PCI Express Root Port 7 (rev c4)
>> 00:1d.0 USB controller: Intel Corporation 7 Series Chipset Family USB Enhanced Host Controller #1 (rev 04)
>> 00:1f.0 ISA bridge: Intel Corporation H77 Express Chipset LPC Controller (rev 04)
>> 00:1f.2 SATA controller: Intel Corporation 7 Series Chipset Family 6-port SATA AHCI Controller (rev 04)
>> 00:1f.3 SMBus: Intel Corporation 7 Series Chipset Family SMBus Controller (rev 04)
>> 03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 09)
>> 04:00.0 IDE interface: Marvell Technology Group Ltd. 88SE9172 SATA III 6Gb/s RAID Controller (rev 11)
>>
>> There is nothing visible in dmesg or the logs that could give a clue
>> as to where the poor performance is coming from: the XVA was exported
>> from XCP 1.1 in about 3 minutes, while the import takes about 30
>> minutes (confirmed with a few different XVAs), and while importing,
>> dom0 becomes extremely sluggish.
>>
>> Any help would be highly appreciated.
>>
>> Kind regards,
>> S.
>>
>> _______________________________________________
>> Xen-api mailing list
>> Xen-api@xxxxxxxxxxxxx
>> http://lists.xen.org/cgi-bin/mailman/listinfo/xen-api
>>
>

_______________________________________________
Xen-api mailing list
Xen-api@xxxxxxxxxxxxx
http://lists.xen.org/cgi-bin/mailman/listinfo/xen-api

 

