Re: [Xen-devel] [PATCH 2/4] xen/netback: Split one page pool into two(tx/rx) page pool.
On Thu, 2012-11-15 at 07:04 +0000, Annie Li wrote:
> For tx path, this implementation simplifies the work of searching out
> grant page from page pool based on grant reference.

It's still a linear search though, and it doesn't look much simpler to
me:

	for (i = 0; i < count; i++) {
		if (tx_pool)
			vif = netbk->gnttab_tx_vif[i];
		else
			vif = netbk->gnttab_rx_vif[i];

		pers_entry = vif->persistent_gnt;
		gnt_count = &vif->persistent_gntcnt;
		gnt_total = MAXIMUM_OUTSTANDING_BLOCK_REQS;

becomes:

	for (i = 0; i < count; i++) {
		if (tx_pool) {
			vif = netbk->gnttab_tx_vif[i];
			gnt_count = &vif->persistent_tx_gntcnt;
			gnt_total = XEN_NETIF_TX_RING_SIZE;
			pers_entry = vif->persistent_tx_gnt;
		} else {
			vif = netbk->gnttab_rx_vif[i];
			gnt_count = &vif->persistent_rx_gntcnt;
			gnt_total = 2*XEN_NETIF_RX_RING_SIZE;
			pers_entry = vif->persistent_rx_gnt;
		}

> @@ -111,8 +109,16 @@ struct xenvif {
>
> 	wait_queue_head_t waiting_to_free;
>
> -	struct persistent_entry *persistent_gnt[MAXIMUM_OUTSTANDING_BLOCK_REQS];
> -	unsigned int persistent_gntcnt;
> +	struct persistent_entry *persistent_tx_gnt[XEN_NETIF_TX_RING_SIZE];
> +
> +	/*
> +	 * 2*XEN_NETIF_RX_RING_SIZE is for the case of each head/fragment page

Shouldn't that have been incorporated into MAXIMUM_OUTSTANDING_BLOCK_REQS
(sic) too?

> +	 * using 2 copy operations.
> +	 */
> +	struct persistent_entry *persistent_rx_gnt[2*XEN_NETIF_RX_RING_SIZE];

What is the per-vif memory overhead after this change?

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel