Re: [Xen-devel] [PATCH net-next v1] xen-netback: make copy batch size configurable
> -----Original Message-----
> From: Joao Martins [mailto:joao.m.martins@xxxxxxxxxx]
> Sent: 13 November 2017 11:54
> To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> Cc: netdev@xxxxxxxxxxxxxxx; Wei Liu <wei.liu2@xxxxxxxxxx>;
> xen-devel@xxxxxxxxxxxxxxxxxxxx
> Subject: Re: [PATCH net-next v1] xen-netback: make copy batch size
> configurable
>
> On 11/13/2017 10:33 AM, Paul Durrant wrote:
> >> -----Original Message-----
> >> From: Joao Martins [mailto:joao.m.martins@xxxxxxxxxx]
> >> Sent: 10 November 2017 19:35
> >> To: netdev@xxxxxxxxxxxxxxx
> >> Cc: Joao Martins <joao.m.martins@xxxxxxxxxx>; Wei Liu
> >> <wei.liu2@xxxxxxxxxx>; Paul Durrant <Paul.Durrant@xxxxxxxxxx>;
> >> xen-devel@xxxxxxxxxxxxxxxxxxxx
> >> Subject: [PATCH net-next v1] xen-netback: make copy batch size
> >> configurable
> >>
> >> Commit eb1723a29b9a ("xen-netback: refactor guest rx") refactored Rx
> >> handling and as a result decreased the max grant copy ops from 4352
> >> to 64. Before this commit it would drain the rx_queue (while there
> >> were enough slots in the ring to put packets), then copy to all pages
> >> and write responses on the ring. With the refactor we do almost the
> >> same, albeit the last two steps are done every COPY_BATCH_SIZE (64)
> >> copies.
> >>
> >> For big packets, the value of 64 means copying 3 packets in the best
> >> case (17 copies each) and only 1 packet in the worst case (34 copies,
> >> i.e. if all frags plus the head cross the 4k grant boundary), which
> >> can happen when packets come from a local backend process.
> >>
> >> Instead of hardcoding it to 64 grant copies, let's allow the user to
> >> select the value (keeping the current one as default) by introducing
> >> the `copy_batch_size` module parameter. This allows users to select
> >> higher batches (i.e. better throughput with big packets), as was
> >> possible prior to the above-mentioned commit.
> >>
> >> Signed-off-by: Joao Martins <joao.m.martins@xxxxxxxxxx>
> >> ---
> >>  drivers/net/xen-netback/common.h    |  6 ++++--
> >>  drivers/net/xen-netback/interface.c | 25 ++++++++++++++++++++++++-
> >>  drivers/net/xen-netback/netback.c   |  5 +++++
> >>  drivers/net/xen-netback/rx.c        |  5 ++++-
> >>  4 files changed, 37 insertions(+), 4 deletions(-)
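To make the arithmetic above concrete: a maximum-size packet spans
roughly 17 ring slots (about 64KiB of data at 4KiB per slot, plus the
head), and each slot needs one grant copy when its data does not
straddle a grant boundary and two when it does. A throwaway sketch of
the numbers (illustrative only, not part of the patch):

    /* Illustrative arithmetic: packets per copy batch. */
    #include <stdio.h>

    #define PKT_SLOTS       17   /* ~64KiB packet at 4KiB per ring slot */
    #define COPY_BATCH_SIZE 64

    int main(void)
    {
            unsigned int best = PKT_SLOTS;      /* no slot straddles a grant */
            unsigned int worst = 2 * PKT_SLOTS; /* every slot splits in two  */

            printf("best:  %u copies/pkt -> %u pkts per batch\n",
                   best, COPY_BATCH_SIZE / best);   /* 17 -> 3 */
            printf("worst: %u copies/pkt -> %u pkts per batch\n",
                   worst, COPY_BATCH_SIZE / worst); /* 34 -> 1 */
            return 0;
    }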
> >> diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
> >> index a46a1e94505d..a5fe36e098a7 100644
> >> --- a/drivers/net/xen-netback/common.h
> >> +++ b/drivers/net/xen-netback/common.h
> >> @@ -129,8 +129,9 @@ struct xenvif_stats {
> >>  #define COPY_BATCH_SIZE 64
> >>
> >>  struct xenvif_copy_state {
> >> -        struct gnttab_copy op[COPY_BATCH_SIZE];
> >> -        RING_IDX idx[COPY_BATCH_SIZE];
> >> +        struct gnttab_copy *op;
> >> +        RING_IDX *idx;
> >> +        unsigned int size;
>
> > Could you name this batch_size, or something like that, to make it
> > clear what it means?
>
> Yeap, will change it.
>
> >>          unsigned int num;
> >>          struct sk_buff_head *completed;
> >>  };
> >> @@ -381,6 +382,7 @@ extern unsigned int rx_drain_timeout_msecs;
> >>  extern unsigned int rx_stall_timeout_msecs;
> >>  extern unsigned int xenvif_max_queues;
> >>  extern unsigned int xenvif_hash_cache_size;
> >> +extern unsigned int xenvif_copy_batch_size;
> >>
> >>  #ifdef CONFIG_DEBUG_FS
> >>  extern struct dentry *xen_netback_dbg_root;
> >> diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
> >> index d6dff347f896..a558868a883f 100644
> >> --- a/drivers/net/xen-netback/interface.c
> >> +++ b/drivers/net/xen-netback/interface.c
> >> @@ -516,7 +516,20 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
> >>
> >>  int xenvif_init_queue(struct xenvif_queue *queue)
> >>  {
> >> +        int size = xenvif_copy_batch_size;
>
> > unsigned int
>
> >>          int err, i;
> >> +        void *addr;
> >> +
> >> +        addr = vzalloc(size * sizeof(struct gnttab_copy));
>
> > Does the memory need to be zeroed?
>
> It doesn't need to be, but given that xenvif_queue is zeroed (which
> included this region) I thought I would leave it the same way.

Ok.

> >> +        if (!addr)
> >> +                goto err;
> >> +        queue->rx_copy.op = addr;
> >> +
> >> +        addr = vzalloc(size * sizeof(RING_IDX));
>
> > Likewise.
>
> >> +        if (!addr)
> >> +                goto err;
> >> +        queue->rx_copy.idx = addr;
> >> +        queue->rx_copy.size = size;
> >>
> >>          queue->credit_bytes = queue->remaining_credit = ~0UL;
> >>          queue->credit_usec = 0UL;
> >> @@ -544,7 +557,7 @@ int xenvif_init_queue(struct xenvif_queue *queue)
> >>                           queue->mmap_pages);
> >>          if (err) {
> >>                  netdev_err(queue->vif->dev, "Could not reserve mmap_pages\n");
> >> -                return -ENOMEM;
> >> +                goto err;
> >>          }
> >>
> >>          for (i = 0; i < MAX_PENDING_REQS; i++) {
> >> @@ -556,6 +569,13 @@ int xenvif_init_queue(struct xenvif_queue *queue)
> >>          }
> >>
> >>          return 0;
> >> +
> >> +err:
> >> +        if (queue->rx_copy.op)
> >> +                vfree(queue->rx_copy.op);
>
> > vfree is safe to be called with NULL.
>
> Oh, almost forgot - thanks.
>
> >> +        if (queue->rx_copy.idx)
> >> +                vfree(queue->rx_copy.idx);
> >> +        return -ENOMEM;
> >>  }
> >>
> >>  void xenvif_carrier_on(struct xenvif *vif)
> >> @@ -788,6 +808,9 @@ void xenvif_disconnect_ctrl(struct xenvif *vif)
> >>   */
> >>  void xenvif_deinit_queue(struct xenvif_queue *queue)
> >>  {
> >> +        vfree(queue->rx_copy.op);
> >> +        vfree(queue->rx_copy.idx);
> >> +        queue->rx_copy.size = 0;
> >>          gnttab_free_pages(MAX_PENDING_REQS, queue->mmap_pages);
> >>  }
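Taking the review points above together (vfree() being NULL-safe, and
everything funnelling through one error label), the allocation and
error path in xenvif_init_queue() could collapse to something like
this minimal sketch (illustrative only, not the code that was merged):

    int xenvif_init_queue(struct xenvif_queue *queue)
    {
            unsigned int size = xenvif_copy_batch_size;

            queue->rx_copy.op = vzalloc(size * sizeof(struct gnttab_copy));
            queue->rx_copy.idx = vzalloc(size * sizeof(RING_IDX));
            if (!queue->rx_copy.op || !queue->rx_copy.idx)
                    goto err;
            queue->rx_copy.size = size;

            /* ... rest of the existing queue setup, using goto err ... */

            return 0;

    err:
            vfree(queue->rx_copy.op);       /* vfree(NULL) is a no-op */
            vfree(queue->rx_copy.idx);
            return -ENOMEM;
    }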
> >> diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
> >> index a27daa23c9dc..3a5e1d7ac2f4 100644
> >> --- a/drivers/net/xen-netback/netback.c
> >> +++ b/drivers/net/xen-netback/netback.c
> >> @@ -96,6 +96,11 @@ unsigned int xenvif_hash_cache_size = XENVIF_HASH_CACHE_SIZE_DEFAULT;
> >>  module_param_named(hash_cache_size, xenvif_hash_cache_size, uint, 0644);
> >>  MODULE_PARM_DESC(hash_cache_size, "Number of flows in the hash cache");
> >>
> >> +/* This is the maximum batch of grant copies on Rx */
> >> +unsigned int xenvif_copy_batch_size = COPY_BATCH_SIZE;
> >> +module_param_named(copy_batch_size, xenvif_copy_batch_size, uint, 0644);
> >> +MODULE_PARM_DESC(copy_batch_size, "Maximum batch of grant copies on Rx");
> >> +
> >>  static void xenvif_idx_release(struct xenvif_queue *queue, u16 pending_idx,
> >>                                 u8 status);
> >>
> >> diff --git a/drivers/net/xen-netback/rx.c b/drivers/net/xen-netback/rx.c
> >> index b1cf7c6f407a..793a85f61f9d 100644
> >> --- a/drivers/net/xen-netback/rx.c
> >> +++ b/drivers/net/xen-netback/rx.c
> >> @@ -168,11 +168,14 @@ static void xenvif_rx_copy_add(struct xenvif_queue *queue,
> >>                                 struct xen_netif_rx_request *req,
> >>                                 unsigned int offset, void *data, size_t len)
> >>  {
> >> +        unsigned int batch_size;
> >>          struct gnttab_copy *op;
> >>          struct page *page;
> >>          struct xen_page_foreign *foreign;
> >>
> >> -        if (queue->rx_copy.num == COPY_BATCH_SIZE)
> >> +        batch_size = min(xenvif_copy_batch_size, queue->rx_copy.size);
>
> > Surely queue->rx_copy.size and xenvif_copy_batch_size are always
> > identical? Why do you need this statement (and hence the stack
> > variable)?
>
> This statement was to allow the value to be changed dynamically: it
> would affect all newly created guests, and also running guests if the
> value happened to be smaller than the initially allocated one. But I
> suppose I should make the behaviour more consistent with the other
> params we have right now and just look at the initially allocated one,
> `queue->rx_copy.batch_size`?

Yes, that would certainly be consistent, but I can see value in
allowing it to be dynamically tuned, so perhaps adding some
re-allocation code to allow the batch to be grown as well as shrunk
might be nice.

  Paul

> Joao
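For reference, one possible shape for that re-allocation; the helper
name and the flush precondition are assumptions for illustration, not
code from this thread:

    /* Hypothetical: resize the copy batch to match the module
     * parameter. Only safe to call once the current batch has been
     * flushed (queue->rx_copy.num == 0).
     */
    static int xenvif_rx_copy_realloc(struct xenvif_queue *queue,
                                      unsigned int new_size)
    {
            struct gnttab_copy *op;
            RING_IDX *idx;

            if (new_size == queue->rx_copy.size)
                    return 0;

            op = vzalloc(new_size * sizeof(*op));
            idx = vzalloc(new_size * sizeof(*idx));
            if (!op || !idx) {
                    vfree(op);
                    vfree(idx);
                    return -ENOMEM; /* keep the old arrays on failure */
            }

            vfree(queue->rx_copy.op);
            vfree(queue->rx_copy.idx);
            queue->rx_copy.op = op;
            queue->rx_copy.idx = idx;
            queue->rx_copy.size = new_size;

            return 0;
    }

Since the parameter is registered with mode 0644, it should already be
writable at runtime (e.g. via
/sys/module/xen_netback/parameters/copy_batch_size), so a helper like
the above would let running queues pick up the new value rather than
only newly created ones.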
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel