
Re: [Xen-devel] blkback global resources

>>> On 27.03.12 at 11:41, Wei Liu <wei.liu2@xxxxxxxxxx> wrote:
> On Mon, 2012-03-26 at 17:20 +0100, Ian Campbell wrote:
>> On Mon, 2012-03-26 at 16:56 +0100, Jan Beulich wrote:
>> > All the resources allocated based on xen_blkif_reqs are global in
>> > blkback. While (without having measured anything) I think that this
>> > is bad from a QoS perspective (not the least implied from a warning
>> > issued by Citrix'es multi-page-ring patches:
>> > 
>> >    if (blkif_reqs < BLK_RING_SIZE(order))
>> >            printk(KERN_WARNING "WARNING: "
>> >                   "I/O request space (%d reqs) < ring order %ld, "
>> >                   "consider increasing %s.reqs to >= %ld.",
>> >                   blkif_reqs, order, KBUILD_MODNAME,
>> >                   roundup_pow_of_two(BLK_RING_SIZE(order)));
>> > 
>> > indicating that this _is_ a bottleneck), I'm otoh hesitant to convert
>> > this to per-instance allocations, as the amount of memory taken
>> > away from Dom0 for this may be not insignificant when there are
>> > many devices.
>> > 
> What's your main concern on QoS? Lock contention? Starvation? Or any
> other things?

However you want to put it. Prior to the multi-page ring patches, we
have 64 pending requests (global) and 32 ring entries. Obviously,
bumping the ring size just to order 1 will already bring the number of
possible in-flight entries per device on par with those in-flight across
all devices. So _if_ someone really determined that a multi-page ring
helps performance, I wonder whether that was with manually
adjusted global pending request values (not said anywhere) or with
just a single frontend (not very close to real-world scenarios).

In any case, two guests with heavy I/O clearly have the potential to
hinder each other, even if both get backed by different physical disks.


Xen-devel mailing list


