
Re: [Xen-devel] [PATCH] xen: add persistent grants to xbd_xenbus



On 10/11/12 11:45, Manuel Bouyer wrote:
> On Sun, Nov 04, 2012 at 11:23:50PM +0100, Roger Pau Monne wrote:
>> This patch implements persistent grants for xbd_xenbus (blkfront in
>> Linux terminology). The effect of this change is to reduce the number
>> of unmap operations performed, since they cause a (costly) TLB
>> shootdown. This allows the I/O performance to scale better when a
>> large number of VMs are performing I/O.
>>
>> On startup xbd_xenbus notifies the backend driver that it is using
>> persistent grants. If the backend driver is not capable of persistent
>> mapping, xbd_xenbus will still use the same grants, since this is
>> compatible with the previous protocol and keeps the code in
>> xbd_xenbus simpler.
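
The negotiation above boils down to a single xenstore node. A minimal
sketch of the frontend side, assuming the xenbus helpers xbd_xenbus
already uses and the "feature-persistent" node name; this is only an
illustration, not the patch code:

/*
 * Sketch only: advertise persistent grants while connecting and check
 * whether the backend accepted them.  xbt is a transaction obtained
 * from xenbus_transaction_start(); error handling is omitted.
 */
unsigned long persistent = 0;

/* Frontend: tell the backend that its grants will stay mapped. */
xenbus_printf(xbt, sc->sc_xbusd->xbusd_path, "feature-persistent", "%u", 1);

/*
 * Backend: publishes "feature-persistent" under its own path when it
 * supports persistent mappings.  If the node is absent or 0, the
 * frontend keeps reusing the same grants anyway, which the old
 * protocol handles fine.
 */
if (xenbus_read_ul(NULL, sc->sc_xbusd->xbusd_otherend,
    "feature-persistent", &persistent, 10) != 0)
        persistent = 0;
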
>>
>> Each time a request is sent to the backend driver, xbd_xenbus will
>> check if there are free grants already mapped and will try to use one
>> of those. If there are not enough grants already mapped, xbd_xenbus
>> will request new grants, and they will be added to the list of free
>> grants when the transaction is finished, so they can be reused. Data
>> has to be copied from the request (struct buf) to the mapped grant, or
>> from the mapped grant to the request, depending on the operation being
>> performed.
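
To make the bookkeeping concrete, here is a rough sketch of the kind of
free-grant list described above (the identifiers are made up for this
example and are not necessarily the ones used in the patch):

/*
 * Illustrative only: a per-device list of grants that stay granted to
 * the backend between requests.  Uses the <sys/queue.h> SLIST macros;
 * grant_ref_t is the usual Xen grant table type and vaddr_t the kernel
 * virtual address of the page we memcpy to/from.
 */
struct persistent_gnt {
    SLIST_ENTRY(persistent_gnt) pg_entry;
    grant_ref_t pg_ref;     /* grant shared with the backend */
    vaddr_t     pg_va;      /* local mapping used for the memcpy */
};
SLIST_HEAD(pgnt_head, persistent_gnt);

/*
 * Take an already-granted page if one is free; NULL means the caller
 * has to allocate and grant a fresh page.
 */
static struct persistent_gnt *
pgnt_get(struct pgnt_head *head)
{
    struct persistent_gnt *pgnt = SLIST_FIRST(head);

    if (pgnt != NULL)
        SLIST_REMOVE_HEAD(head, pg_entry);
    return pgnt;
}

/*
 * On completion the grant is not ended (no unmap and no TLB shootdown
 * on the backend side); it simply goes back on the free list.
 */
static void
pgnt_put(struct pgnt_head *head, struct persistent_gnt *pgnt)
{
    SLIST_INSERT_HEAD(head, pgnt, pg_entry);
}
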
>>
>> To test the performance impact of this patch I've benchmarked the
>> number of IOPS performed simultaneously by 15 NetBSD DomU guests on a
>> Linux Dom0 that supports persistent grants. The backend used to
>> perform these tests was a 1GB ramdisk.
>>
>>                      Sum of IOPS
>> Non-persistent            336718
>> Persistent                686010
>>
>> As can be seen, there is roughly a 2x increase in the total number of
>> IOPS performed. As a reference, using exactly the same setup Linux DomUs
>> are able to achieve 1102019 IOPS, so there's still plenty of room for
>> improvement.
> 
> I'd like to see a similar test run against a NetBSD dom0. Also, how big
> are your IOPs?

I will try to run a similar test against a NetBSD Dom0, but with the
patch it will probably be slightly slower than without it, because we
have to perform the memcpy in the frontend. (The NetBSD block frontend
was already doing a memcpy anyway when the memory region was not
aligned to PAGE_SIZE.)
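
To illustrate the cost involved (all identifiers below are placeholders,
this is not the driver code): with persistent grants every segment of a
request is bounced through the already-granted page, whereas before the
copy only happened when the buffer was not page aligned:

/* Sketch of per-segment handling for a write request; for reads the
 * copy goes the other way when the response arrives. */
if (use_persistent_grants) {
    /* always bounce through the persistently granted page */
    memcpy(gnt_va, buf_data, seg_len);
} else if (!page_aligned(buf_data)) {
    /* old behaviour: bounce only when the buf is not page aligned */
    memcpy(bounce_va, buf_data, seg_len);
} else {
    /* old behaviour, aligned case: grant the buf's own page, no copy */
    grant_buf_page(buf_data);
}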

I have to check, but I believe a NetBSD Dom0 will not benefit from
persistent grants in the backend driver, because it is limited to a
single vcpu, and my guess is that with only one vcpu the TLB flush will
be faster than the memcpy (I still need to measure this).

The test was performed with fio, using the following configuration:

[read]
rw=read
size=900m
bs=4k
filename=/foo/bar

Where /foo/bar is the mount point of a disk backed by a ramdisk on the
Dom0; the filesystem is FFSv2 without WAPBL and with the default block
size. The block size used in the test is 4k, and the test was run 3
times and the results averaged.

Roger.
