
Re: [Xen-users] File-based domU - Slow storage write since xen 4.8



On 07/21/17 18:07, Roger Pau Monné wrote:
> On Fri, Jul 21, 2017 at 11:53:33AM -0400, Keith Busch wrote:
>> On Fri, Jul 21, 2017 at 12:19:39PM +0200, Benoit Depail wrote:
>>> On 07/20/17 19:36, Keith Busch wrote:
>>>>
>>>> As a test, could you throttle the xvdb queue's max_sectors_kb? If I
>>>> followed xen-blkfront correctly, the default should have it set to 44.
>>>> Try setting it to 40.
>>>>
>>>>   echo 40 > /sys/block/xvdb/queue/max_sectors_kb
>>>>
>>>
>>> The default value on my domU is 128.
>>>
>>> I ran a couple of tests with different values, starting from 40 and up
>>> to 128, clearing the cache between each test.
>>>
>>> The only value that showed the issue is 128. Even setting max_sectors_kb
>>> to 127 is enough to get normal behaviour.
>>
>> Ok, I don't quite follow how it's initialized to 128k, but your
>> results appear to confirm the default settings are not optimal for the
>> interface. The patch you identified just submits commands to the max
>> size the driver asked to use. If the largest size tanks performance,
>> I think xen-blkfront should register a smaller transfer size, or maybe
>> some other constraint needs to be refined.
>>
>> Roger,
>>
>> I'm a bit out of my depth here in the xen code. Is this something you
>> may be able to help clarify?
> 
> Hm, I'm not sure I follow either. AFAIK this problem came from
> changing the Linux version in the Dom0 (where the backend, blkback is
> running), rather than in the DomU right?
> 
> Regarding the queue/sectors stuff, blkfront uses several blk_queue
> functions to set those parameters, maybe there's something wrong
> there, but I cannot really spot what it is:
> 
> http://elixir.free-electrons.com/linux/latest/source/drivers/block/xen-blkfront.c#L929
> 
> In the past the number of pages that could fit in a single ring
> request was limited to 11, but some time ago indirect descriptors
> were introduced in order to lift this limit, and now requests can
> have a much bigger number of pages.
> 
> Could you check the max_sectors_kb of the underlying storage you are
> using in Dom0?
> 
> Roger.
> 
I checked the value for the loop device as well.

With 4.4.77 (bad write performance)
$ cat /sys/block/sda/queue/max_sectors_kb
1280
$ cat /sys/block/loop1/queue/max_sectors_kb
127


With 4.1.42 (normal write performance)
$ cat /sys/block/sda/queue/max_sectors_kb
4096
$ cat /sys/block/loop1/queue/max_sectors_kb
127
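For anyone reproducing this, the sweep described above (setting max_sectors_kb to different values and re-testing) can be scripted. The helper below is only a sketch, not part of the original thread; the queue directory is passed explicitly (e.g. /sys/block/xvdb/queue on the domU), and writing to it requires root:

```shell
#!/bin/sh
# Sketch: set a block queue's max_sectors_kb and print the value the
# kernel accepted back. The queue directory argument is an assumption;
# on the domU from this thread it would be /sys/block/xvdb/queue.
set_max_sectors() {
    queue_dir=$1   # e.g. /sys/block/xvdb/queue
    value_kb=$2    # transfer size limit in KiB
    echo "$value_kb" > "$queue_dir/max_sectors_kb"
    cat "$queue_dir/max_sectors_kb"
}
```

Used in a loop such as `for kb in 40 64 96 127 128; do set_max_sectors /sys/block/xvdb/queue $kb; <run write benchmark>; done`, clearing the page cache between runs, this bisects the threshold (here, only 128 showed the regression).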

Regards,

-- 
Benoit Depail
Senior Infrastructures Architect
NBS System

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
https://lists.xen.org/xen-users

 

