Re: [Xen-devel] [PATCH v4 10/10] xen/blkback: make pool of persistent grants and free pages per-queue

On Mon, Nov 02, 2015 at 12:21:46PM +0800, Bob Liu wrote:
> Make pool of persistent grants and free pages per-queue/ring instead of
> per-device to get better scalability.

How much better scalability do we get?

.. snip ..
>
>
>  /*
> - * pers_gnts_lock must be used around all the persistent grant helpers
> - * because blkback may use multi-thread/queue for each backend.
> + * We don't need locking around the persistent grant helpers
> + * because blkback uses a single-thread for each backed, so we

s/backed/backend/

> + * can be sure that this functions will never be called recursively.
> + *
> + * The only exception to that is put_persistent_grant, that can be called
> + * from interrupt context (by xen_blkbk_unmap), so we have to use atomic
> + * bit operations to modify the flags of a persistent grant and to count
> + * the number of used grants.
>  */

..snip..

> --- a/drivers/block/xen-blkback/common.h
> +++ b/drivers/block/xen-blkback/common.h
> @@ -282,6 +282,22 @@ struct xen_blkif_ring {
> 	spinlock_t		pending_free_lock;
> 	wait_queue_head_t	pending_free_wq;
>
> +	/* buffer of free pages to map grant refs */

Full stop.

> +	spinlock_t		free_pages_lock;
> +	int			free_pages_num;
> +	struct list_head	free_pages;
> +
> +	/* tree to store persistent grants */

Full stop.

> +	spinlock_t		pers_gnts_lock;
> +	struct rb_root		persistent_gnts;
> +	unsigned int		persistent_gnt_c;
> +	atomic_t		persistent_gnt_in_use;
> +	unsigned long		next_lru;
> +
> +	/* used by the kworker that offload work from the persistent purge */

Full stop.

> +	struct list_head	persistent_purge_list;
> +	struct work_struct	persistent_purge_work;
> +
> 	/* statistics */
> 	unsigned long		st_print;
> 	unsigned long long	st_rd_req;
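As a side note for readers following the quoted comment: the reason the
persistent-grant helpers can drop the per-ring lock is that only
put_persistent_grant may run in interrupt context, and it confines itself
to atomic bit operations plus an atomic counter. A minimal sketch of that
pattern is below; the flag and field names here are illustrative
assumptions for this note, not lines taken from the patch.

/*
 * Sketch of the lock-free put path described in the quoted comment.
 * All names (PERSISTENT_GNT_ACTIVE, persistent_gnt_in_use, the *_sketch
 * types) are hypothetical; the real patch may differ.
 */
#include <linux/atomic.h>
#include <linux/bitops.h>

#define PERSISTENT_GNT_ACTIVE		0	/* grant mapped into an in-flight request */
#define PERSISTENT_GNT_WAS_ACTIVE	1	/* grant used since the last LRU pass */

struct persistent_gnt_sketch {
	unsigned long	flags;			/* touched only via atomic bit ops */
};

struct ring_sketch {
	atomic_t	persistent_gnt_in_use;	/* grants currently handed out */
};

/* Runs only on the single per-ring kthread, so no lock is needed here. */
static void get_persistent_gnt_sketch(struct ring_sketch *ring,
				      struct persistent_gnt_sketch *gnt)
{
	set_bit(PERSISTENT_GNT_ACTIVE, &gnt->flags);
	atomic_inc(&ring->persistent_gnt_in_use);
}

/*
 * May be called from interrupt context (on unmap completion), so it uses
 * only atomic bit operations and an atomic decrement, never a spinlock.
 */
static void put_persistent_gnt_sketch(struct ring_sketch *ring,
				      struct persistent_gnt_sketch *gnt)
{
	set_bit(PERSISTENT_GNT_WAS_ACTIVE, &gnt->flags);
	clear_bit(PERSISTENT_GNT_ACTIVE, &gnt->flags);
	atomic_dec(&ring->persistent_gnt_in_use);
}

The single-writer assumption is what makes this safe: the kthread is the
only context that inserts into or walks the rb_tree, so the interrupt-side
path never needs to touch the tree, only the per-grant flags and the usage
counter.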