
Re: [Xen-devel] blkback global resources

On Mon, 2012-03-26 at 21:47 +0100, Matt Wilson wrote:
> On Mon, Mar 26, 2012 at 09:20:08AM -0700, Ian Campbell wrote:
> > On Mon, 2012-03-26 at 16:56 +0100, Jan Beulich wrote:
> > > Does anyone have an opinion here, in particular regarding the
> > > original authors' decision to make this global, versus the
> > > observation apparently made by Daniel Stodden (the author of said
> > > patch, whose current email address I don't have, so I can't ask
> > > him directly)? Also relevant is the context of multi-page rings,
> > > whose purpose is to allow larger amounts of in-flight I/O.
> > 
> > Not really much to say other than we (well, mostly Wei Liu) have a
> > similar issue with netback too. That used to have a global pool, then
> > a pool per worker thread. When Wei added thread-per-vif support he
> > solved this by adding a "page pool" which handles allocations.
> > Possibly this could grow some sort of fairness knobs and be shared
> > with blkback?
> I'd definitely welcome an approach in blkback similar to the page pool
> changes to netback. Steve Noonan's change to use a spinlock per
> blkfront device [1] should show increased contention on the global
> request list on the blkback side when multiple devices are attached.
> Does netback not also need to handle questions of fairness and
> resource starvation?

It does. What I meant to say was that the page pool could grow some
fairness knobs for netback's own use (if it is not already fair), and,
separately, that it could be shared with blkback. Looking back, my
original statement could have been read as if the fairness knobs were
tied to the sharing with blkback, which wasn't my intent.
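For illustration, here is a minimal userspace sketch of the kind of
shared pool with per-consumer fairness caps being discussed. All names
(pool_get/pool_put, max_share) are hypothetical and do not reflect the
actual netback page pool API; this only models the idea that a shared
resource pool can bound each consumer's share so one busy device cannot
starve the others:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define POOL_SIZE 8

/* A shared pool of pages with a simple fairness policy: no single
 * consumer may hold more than max_share pages at once. */
struct page_pool {
    size_t free;        /* pages currently available        */
    size_t max_share;   /* per-consumer cap (fairness knob) */
};

/* Per-consumer accounting (e.g. one per vif or blkback device). */
struct consumer {
    size_t held;        /* pages this consumer currently holds */
};

static void pool_init(struct page_pool *p, size_t size, size_t max_share)
{
    p->free = size;
    p->max_share = max_share;
}

/* Grant a page unless the pool is exhausted or the consumer has
 * already reached its fair share. */
static bool pool_get(struct page_pool *p, struct consumer *c)
{
    if (p->free == 0 || c->held >= p->max_share)
        return false;
    p->free--;
    c->held++;
    return true;
}

/* Return a page to the pool. */
static void pool_put(struct page_pool *p, struct consumer *c)
{
    assert(c->held > 0);
    c->held--;
    p->free++;
}
```

A real implementation would of course need locking (or per-CPU
caching) around the counters, and the cap might be dynamic rather than
fixed, but the fairness knob itself is just this kind of per-consumer
bound.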


> Matt
> [1] 
> http://git.kernel.org/?p=linux/kernel/git/axboe/linux-block.git;a=commitdiff;h=3467811e26660eb46bc655234573d22d6876d5f9
