Re: [Xen-devel] [PATCH v5 00/10] xen-block: multi hardware-queues/rings support
On Sat, Nov 14, 2015 at 11:12:09AM +0800, Bob Liu wrote:
> Note: These patches were based on original work of Arianna's internship for
> GNOME's Outreach Program for Women.
>
> After switching to the blk-mq API, a guest has more than one (nr_vcpus) software
> request queues associated with each block front. These queues can be mapped over
> several rings (hardware queues) to the backend, making it very easy for us to run
> multiple threads on the backend for a single virtual disk.
>
> By having different threads issuing requests at the same time, the performance
> of the guest can be improved significantly.
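(For context: the mapping described above amounts to the frontend advertising more
than one hardware queue when it registers with blk-mq. A minimal sketch of that
registration follows; nr_rings, ring_size, blkfront_mq_ops, info and gd are
illustrative placeholders, not the exact code in this series:)

    #include <linux/blk-mq.h>

    /* Advertise one blk-mq hardware queue per backend ring. */
    memset(&info->tag_set, 0, sizeof(info->tag_set));
    info->tag_set.ops          = &blkfront_mq_ops;  /* provides .queue_rq */
    info->tag_set.nr_hw_queues = info->nr_rings;    /* rings negotiated with blkback */
    info->tag_set.queue_depth  = ring_size;         /* requests in flight per ring */
    info->tag_set.numa_node    = NUMA_NO_NODE;
    info->tag_set.flags        = BLK_MQ_F_SHOULD_MERGE;
    info->tag_set.driver_data  = info;

    if (blk_mq_alloc_tag_set(&info->tag_set))
            return -EINVAL;

    /* blk-mq then spreads the per-CPU software queues over these
     * nr_hw_queues hardware queues, so requests from different vCPUs
     * can be issued on different rings concurrently. */
    gd->queue = blk_mq_init_queue(&info->tag_set);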
>
> The test was done on top of the null_blk driver:
> dom0: v4.3-rc7 16vcpus 10GB "modprobe null_blk"
Surely v4.4-rc1?
> domU: v4.3-rc7 16vcpus 10GB
Ditto.
>
> [test]
> rw=read
> direct=1
> ioengine=libaio
> bs=4k
> time_based
> runtime=30
> filename=/dev/xvdb
> numjobs=16
> iodepth=64
> iodepth_batch=64
> iodepth_batch_complete=64
> group_reporting
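(Side note on the job above: with numjobs=16 and iodepth=64 it keeps roughly
16 * 64 = 1024 read requests in flight against /dev/xvdb, which is what actually
exercises several rings at once. Saving the parameters verbatim as e.g. test.fio
and running "fio test.fio" in the guest should reproduce the workload.)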
>
> Results:
> iops1: After commit "xen/blkfront: make persistent grants per-queue".
> iops2: After commit "xen/blkback: make persistent grants and free pages pool per-queue".
>
> Queues:        1      4            8            16
> Iops orig(k):  810    1064         780          700
> Iops1(k):      810    1230 (~20%)  1024 (~20%)   850 (~20%)
> Iops2(k):      810    1410 (~35%)  1354 (~75%)  1440 (~100%)
Holy cow. That is some contention on a lock (iops1 vs iops2). Thank you for
running these numbers.
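The iops1 -> iops2 jump is consistent with what the two commit subjects describe:
before them the persistent-grant tracking (and blkback's free-pages pool) was kept
per device, so every ring's I/O path serialized on the same shared state; afterwards
each ring owns its own copy. Very roughly, with simplified names rather than the
real blkfront/blkback structures:

    /* Shared per-device pool: every ring takes the same lock. */
    struct per_device_state {
            spinlock_t       lock;              /* contended by all rings */
            struct rb_root   persistent_gnts;
            struct list_head free_pages;
    };

    /* Per-ring pool: each ring only touches its own state, so adding
     * rings no longer adds lock contention. */
    struct per_ring_state {
            spinlock_t       lock;              /* effectively uncontended */
            struct rb_root   persistent_gnts;
            struct list_head free_pages;
    };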
>
> With 4 queues after this series we can get a ~75% increase in IOPS, and
> performance won't drop as the number of queues increases.
>
> Please find the respective chart in this link:
> https://www.dropbox.com/s/agrcy2pbzbsvmwv/iops.png?dl=0
Thank you for that link.
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel