
Re: [Xen-users] Software Raid 5 domu performance drop



Software RAIDs by design have huge performance hits. I wouldn't really
use dmraid in production.
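
For reference, the "phy" attachment of the raw md device described further down
in the thread would typically look something like the sketch below in the domU
configuration file (the file path and the write flag are assumptions; /dev/md0
and xvdb are the device names used in the thread):

    # /etc/xen/domu.cfg -- pass the raw md RAID 5 array through with the phy backend
    disk = [ 'phy:/dev/md0,xvdb,w' ]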

On Sun, May 26, 2013 at 5:48 PM, braintorch <kkabardin@xxxxxxxxx> wrote:
> 26.05.2013 15:51, James Harper wrote:
>
>>
>>> -----Original Message-----
>>> From: braintorch [mailto:kkabardin@xxxxxxxxx]
>>> Sent: Sunday, 26 May 2013 9:49 PM
>>> To: James Harper
>>> Cc: xen-users@xxxxxxxxxxxxx
>>> Subject: Re: [Xen-users] Software Raid 5 domu performance drop
>>>
>>> 26.05.2013 15:16, James Harper wrote:
>>>>>
>>>>> 26.05.2013 14:43, James Harper wrote:
>>>>>>>
>>>>>>> Hello. I'm experiencing a solid I/O performance drop when using
>>>>>>> software RAID 5 from a PV DomU. I can get about 420 MB/s for sequential
>>>>>>> reads and ~220 MB/s for sequential writes when using it from Dom0, but
>>>>>>> only ~170 MB/s for reads and ~80 MB/s for writes when using it from DomU.
>>>>>>>
>>>>>>> DomU performance for a single drive is close to native -- ~160 MB/s for
>>>>>>> reads and ~160 MB/s for writes.
>>>>>>>
>>>>>>> There is no filesystem or LVM, just raw data. Debian Wheezy x86_64 for
>>>>>>> both Dom0 and DomU, and the "phy" backend is used to attach the drive to
>>>>>>> the DomU.
>>>>>>>
>>>>>>> Is this a bug or something wrong with my setup? What should I check?
>>>>>>>
>>>>>> How are you measuring this performance?
>>>>>>
>>>>>> James
>>>>>
>>>>> I'm running dd if=/dev/zero of=/dev/xvdb bs=1M for several minutes.
>>>>>
>>>>> Also tried "cat /dev/zero | pv -r > /dev/xvdb " which gave me similar
>>>>> results.
>>>>
>>>> Add oflag=direct to the dd command so that no caching is in effect and
>>>> then compare.
>>>>
>>>> James
>>>
>>> James, it's even more dramatic without caching.
>>>
>>>
>>> Dom0:
>>>
>>> Reading:
>>> dd if=/dev/md0 of=/dev/null bs=1M iflag=direct
>>>    11659116544 bytes (12 GB) copied, 27.4614 s, 425 MB/s
>>>
>>> Writing:
>>> dd if=/dev/zero of=/dev/md0 bs=1M oflag=direct
>>>    10108272640 bytes (10 GB) copied, 135.859 s, 74.4 MB/s
>>>
>>> DomU:
>>>
>>> Reading:
>>> dd if=/dev/xvdb of=/dev/null iflag=direct
>>>    229615104 bytes (230 MB) copied, 75.9394 s, 3.0 MB/s
>>>
>>> Writing:
>>> dd if=/dev/zero of=/dev/xvdb oflag=direct
>>>    231818240 bytes (232 MB) copied, 158.283 s, 1.5 MB/s
>>
>> I don't see a block size on the domu measurements... did you just copy and
>> paste it wrong or did you really leave it at default 512 byte block size?
>>
>> James
>
> Ah, my mistake. I'm sorry. :(
>
> Reading:
> dd if=/dev/xvdb of=/dev/null bs=1M iflag=direct
>  13060014080 bytes (13 GB) copied, 58.949 s, 222 MB/s
>
> Writing:
> dd if=/dev/zero of=/dev/xvdb bs=1M oflag=direct
>  2241855488 bytes (2.2 GB) copied, 29.6292 s, 75.7 MB/s
>
> So, writing without caching is almost the same. Reading is halved compared
> to Dom0, but that is not really an issue for me.
> Is there a way to optimize DomU caching to boost write speed?
>
> Kirill.
>
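
Regarding the question above about boosting write speed: one dom0-side knob that
is often checked for md RAID 5 sequential write throughput (a hedged pointer
only, not something confirmed in this thread) is the array's stripe cache size:

    # In dom0: stripe_cache_size is in pages per device and defaults to 256;
    # a larger value often helps RAID 5 sequential writes at the cost of RAM.
    cat /sys/block/md0/md/stripe_cache_size
    echo 4096 > /sys/block/md0/md/stripe_cache_size

    # Re-run the same direct-I/O test from the domU afterwards to compare:
    dd if=/dev/zero of=/dev/xvdb bs=1M oflag=direct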

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users
