
Re: [Xen-devel] [PATCH net-next v7 4/9] xen-netback: Introduce TX grant mapping



On 13/03/14 11:02, Ian Campbell wrote:
> On Thu, 2014-03-13 at 10:56 +0000, David Vrabel wrote:
>> On 13/03/14 10:33, Ian Campbell wrote:
>>> On Thu, 2014-03-06 at 21:48 +0000, Zoltan Kiss wrote:
>>>> @@ -135,13 +146,31 @@ struct xenvif {
>>>>    pending_ring_idx_t pending_cons;
>>>>    u16 pending_ring[MAX_PENDING_REQS];
>>>>    struct pending_tx_info pending_tx_info[MAX_PENDING_REQS];
>>>> +  grant_handle_t grant_tx_handle[MAX_PENDING_REQS];
>>>>  
>>>>    /* Coalescing tx requests before copying makes number of grant
>>>>     * copy ops greater or equal to number of slots required. In
>>>>     * worst case a tx request consumes 2 gnttab_copy.
>>>>     */
>>>>    struct gnttab_copy tx_copy_ops[2*MAX_PENDING_REQS];
>>>> -
>>>> +  struct gnttab_map_grant_ref tx_map_ops[MAX_PENDING_REQS];
>>>> +  struct gnttab_unmap_grant_ref tx_unmap_ops[MAX_PENDING_REQS];
>>>
>>> I wonder if we should break some of these arrays into separate
>>> allocations? Wasn't there a problem with sizeof(struct xenvif) at one
>>> point?
>>
>> alloc_netdev() falls back to vmalloc() if the kmalloc fails, so there's
>> no need to split these structures.
> 
> Is vmalloc space in abundant supply? For some reason I thought it was
> limited (maybe that's a 32-bit only limitation?)

It is limited on 32-bit, but 64-bit has stupid amounts of it.

/proc/meminfo:

VmallocTotal:   34359738367 kB
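
For reference, the fallback is the usual kzalloc()/vzalloc() pattern in
alloc_netdev_mqs(), where the driver's private area (here struct xenvif)
is tacked onto the end of struct net_device.  A rough sketch of that
allocation -- not the exact net-next code, just the shape of it:

	alloc_size = sizeof(struct net_device);
	if (sizeof_priv) {
		/* align the start of the private area */
		alloc_size = ALIGN(alloc_size, NETDEV_ALIGN);
		alloc_size += sizeof_priv;
	}
	/* allow for aligning the whole allocation */
	alloc_size += NETDEV_ALIGN - 1;

	/* try physically contiguous memory first ... */
	p = kzalloc(alloc_size, GFP_KERNEL | __GFP_NOWARN);
	if (!p)
		/* ... and fall back to vmalloc space if that fails */
		p = vzalloc(alloc_size);
	if (!p)
		return NULL;

So even if struct xenvif grows past what kmalloc can hand out as
contiguous pages, the allocation just ends up in vmalloc space, which is
effectively unlimited on 64-bit (the ~32TB above) but only on the order
of 128MB by default on 32-bit x86.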

David
