Re: [Xen-devel] [PATCH 2/4] xen/netback: Split one page pool into two(tx/rx) page pool.
On 2012-11-15 17:15, Ian Campbell wrote:
> On Thu, 2012-11-15 at 07:04 +0000, Annie Li wrote:
> > For the tx path, this implementation simplifies the work of searching
> > out the grant page from the page pool based on grant reference.
>
> It's still a linear search though, and it doesn't look much simpler to me:
>
>         for (i = 0; i < count; i++) {
>                 if (tx_pool)
>                         vif = netbk->gnttab_tx_vif[i];
>                 else
>                         vif = netbk->gnttab_rx_vif[i];
>                 pers_entry = vif->persistent_gnt;
>                 gnt_count = &vif->persistent_gntcnt;
>                 gnt_total = MAXIMUM_OUTSTANDING_BLOCK_REQS;
>
> becomes:
>
>         for (i = 0; i < count; i++) {
>                 if (tx_pool) {
>                         vif = netbk->gnttab_tx_vif[i];
>                         gnt_count = &vif->persistent_tx_gntcnt;
>                         gnt_total = XEN_NETIF_TX_RING_SIZE;
>                         pers_entry = vif->persistent_tx_gnt;
>                 } else {
>                         vif = netbk->gnttab_rx_vif[i];
>                         gnt_count = &vif->persistent_rx_gntcnt;
>                         gnt_total = 2*XEN_NETIF_RX_RING_SIZE;
>                         pers_entry = vif->persistent_rx_gnt;
>                 }

Yes, the code is not simpler. If we make netback per-VIF based, this code will disappear. The simplification here is that for the tx path the maximum search index is XEN_NETIF_TX_RING_SIZE (256 here), which saves some time when searching out the grant page for a specific grant reference.

> > @@ -111,8 +109,16 @@ struct xenvif {
> >         wait_queue_head_t waiting_to_free;
> >
> > -       struct persistent_entry *persistent_gnt[MAXIMUM_OUTSTANDING_BLOCK_REQS];
> > -       unsigned int persistent_gntcnt;
> > +       struct persistent_entry *persistent_tx_gnt[XEN_NETIF_TX_RING_SIZE];
> > +
> > +       /*
> > +        * 2*XEN_NETIF_RX_RING_SIZE is for the case of each head/fragment page
>
> Shouldn't that be incorporated into MAXIMUM_OUTSTANDING_BLOCK_REQS (sic) too?

Yes, the total value is the same as MAXIMUM_OUTSTANDING_BLOCK_REQS. But here 2*XEN_NETIF_RX_RING_SIZE means it is only used by the rx path, and it is used just like other elements in the netback structure, such as grant_copy_op, meta, etc.

> > +        * using 2 copy operations.
> > +        */
> > +       struct persistent_entry *persistent_rx_gnt[2*XEN_NETIF_RX_RING_SIZE];
>
> What is the per-vif memory overhead after this change?
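[For readers following the thread: below is a minimal, self-contained sketch of the linear search being discussed. It is not the actual netback code; `struct persistent_entry`'s layout, `find_persistent_gnt`, and the pool layout are simplified stand-ins. The point of the patch is that splitting the single pool into tx and rx pools bounds the tx-path scan at XEN_NETIF_TX_RING_SIZE entries rather than the combined MAXIMUM_OUTSTANDING_BLOCK_REQS total.]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define XEN_NETIF_TX_RING_SIZE 256

typedef uint32_t grant_ref_t;

/* Simplified stand-in for netback's persistent grant entry. */
struct persistent_entry {
	grant_ref_t gref;	/* grant reference this entry caches */
	void *page;		/* mapped page backing the grant */
};

/*
 * Linear search of one pool for a cached grant mapping.  With split
 * pools, a tx-path lookup scans at most XEN_NETIF_TX_RING_SIZE
 * entries instead of the combined tx+rx total.
 */
static struct persistent_entry *
find_persistent_gnt(struct persistent_entry **pool, unsigned int count,
		    grant_ref_t gref)
{
	unsigned int i;

	for (i = 0; i < count; i++)
		if (pool[i] && pool[i]->gref == gref)
			return pool[i];
	return NULL;	/* not cached yet; caller must map the grant */
}
```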
The per-vif memory overhead is the following: for the tx path, it is about XEN_NETIF_TX_RING_SIZE*PAGE_SIZE (256 pages here); for the rx path, it is about 2*XEN_NETIF_RX_RING_SIZE*PAGE_SIZE (512 pages here). I can add some comment here.

Thanks
Annie

> Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
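[A quick sanity check on those overhead figures, assuming the usual 4 KiB page size; this is illustrative arithmetic, not netback code.]

```c
#include <assert.h>

#define PAGE_SIZE		4096UL	/* assumed 4 KiB pages */
#define XEN_NETIF_TX_RING_SIZE	256UL
#define XEN_NETIF_RX_RING_SIZE	256UL

/* Worst-case tx page-pool overhead per vif, in bytes: 256 pages. */
static unsigned long tx_pool_bytes(void)
{
	return XEN_NETIF_TX_RING_SIZE * PAGE_SIZE;
}

/*
 * Worst-case rx page-pool overhead per vif, in bytes: the factor of 2
 * covers each head/fragment page needing up to 2 copy operations,
 * giving 512 pages.
 */
static unsigned long rx_pool_bytes(void)
{
	return 2UL * XEN_NETIF_RX_RING_SIZE * PAGE_SIZE;
}
```

That is, roughly 1 MiB for tx plus 2 MiB for rx per vif in the worst case.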