Re: [Xen-devel] [PATCH RFC v2 0/5] Multi-queue support for xen-blkfront and xen-blkback
On 06/30/2015 10:21 PM, Marcus Granado wrote:
> On 13/05/15 11:29, Bob Liu wrote:
>>
>> On 04/28/2015 03:46 PM, Arianna Avanzini wrote:
>>> Hello Christoph,
>>>
>>> Il 28/04/2015 09:36, Christoph Hellwig ha scritto:
>>>> What happened to this patchset?
>>>>
>>>
>>> It was passed on to Bob Liu, who published a follow-up patchset here:
>>> https://lkml.org/lkml/2015/2/15/46
>>>
>>
>> Right, and then I was interrupted by another xen-block feature: the
>> 'multi-page' ring.
>> I will be back on this patchset soon. Thank you!
>>
>> -Bob
>>
>
> Hi,
>
> Our measurements for the multiqueue patch indicate a clear improvement in
> IOPS when more queues are used.
>
> The measurements were obtained under the following conditions:
>
> - using blkback as the dom0 backend, with the multiqueue patch applied to a
>   dom0 kernel 4.0 running on 8 vcpus.
>
> - using a recent Ubuntu 15.04 kernel 3.19 with the multiqueue frontend
>   applied, used as a guest on 4 vcpus.
>
> - using a Micron RealSSD P320h as the underlying local storage on a Dell
>   PowerEdge R720 with 2 Xeon E5-2643 v2 CPUs.
>
> - fio 2.2.7-22-g36870 as the generator of synthetic loads in the guest. We
>   used direct I/O to skip caching in the guest and ran fio for 60s, reading
>   block sizes ranging from 512 bytes to 4MiB. A queue depth of 32 per queue
>   was used to saturate the individual vcpus in the guest.
>
> We were interested in observing storage IOPS for different block sizes. Our
> expectation was that IOPS would improve when increasing the number of
> queues, because both the guest and dom0 would be able to make use of more
> vcpus to handle these requests.
>
> These are the results (as aggregate IOPS for all the fio threads) that we
> got for the conditions above with sequential reads:
>
> fio_threads  io_depth  block_size  1-queue_iops  8-queue_iops
>      8          32        512         158K          264K
>      8          32         1K         157K          260K
>      8          32         2K         157K          258K
>      8          32         4K         148K          257K
>      8          32         8K         124K          207K
>      8          32        16K          84K          105K
>      8          32        32K          50K           54K
>      8          32        64K          24K           27K
>      8          32       128K          11K           13K
>
> 8-queue IOPS were better than single-queue IOPS for all block sizes. There
> were very good improvements as well for sequential writes with block size
> 4K (from 80K IOPS with a single queue to 230K IOPS with 8 queues), and no
> regressions were visible in any measurement performed.
>

Great! Thank you very much for the test.

I'm trying to rebase these patches onto the latest kernel version (v4.1) and
will send them out in the following days.

--
Regards,
-Bob
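For readers who want to reproduce the guest-side load described above, an fio
invocation along the following lines should be roughly equivalent. The device
path (/dev/xvdb), the libaio ioengine, and the job name are assumptions made
here for illustration; the original report only gives the fio version and the
high-level parameters (8 threads, queue depth 32, direct I/O, 60s sequential
reads per block size).

    # Sketch of one 60s sequential-read run at a 4K block size:
    # 8 jobs, queue depth 32 per job, direct I/O to bypass the guest cache.
    # /dev/xvdb and libaio are assumed, not taken from the original report.
    fio --name=seqread --filename=/dev/xvdb --ioengine=libaio \
        --rw=read --direct=1 --bs=4k --iodepth=32 --numjobs=8 \
        --runtime=60 --time_based --group_reporting

Repeating the run with --bs set to each value from 512 up to 4m would cover
the block-size sweep described in the measurements.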