
Re: [Xen-devel] [PATCH 5/6] xen-netback: coalesce slots before copying



On Mon, Mar 25, 2013 at 3:13 PM, David Vrabel <david.vrabel@xxxxxxxxxx> wrote:
> On 25/03/13 11:08, Wei Liu wrote:
>> This patch tries to coalesce tx requests when constructing grant copy
>> structures. It enables netback to deal with the situation where the
>> frontend's MAX_SKB_FRAGS is larger than the backend's MAX_SKB_FRAGS.
>>
>> It defines max_skb_slots, which is an estimate of the maximum number
>> of slots a guest can send; anything bigger than that is considered
>> malicious. It is currently set to 20, which should be enough to
>> accommodate Linux (16 to 19).
>
> This maximum needs to be defined as part of the protocol and added to
> the interface header.
>

No, this is not part of the protocol and it is not a hard limit. It is
configurable by the system administrator.
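
For illustration, "configurable" could be a module parameter along
these lines (a sketch only; the name and default come from the patch
description, the permissions are my assumption):

    static unsigned int max_skb_slots = 20; /* Linux guests use 16 to 19 */
    module_param(max_skb_slots, uint, 0444);
    MODULE_PARM_DESC(max_skb_slots,
                     "Maximum number of slots a guest can use per packet");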

>> Also change variable name from "frags" to "slots" in netbk_count_requests.
>
> I think this renaming should have been done as a separate patch.
>

The frag -> slot renaming only makes sense after this patch. If I split
it out, I will need to partially revert what I've done in a previous
patch (like the frag -> slot change in the comment below that you
pointed out). :-(

>> --- a/drivers/net/xen-netback/netback.c
>> +++ b/drivers/net/xen-netback/netback.c
>> @@ -47,11 +47,26 @@
>>  #include <asm/xen/hypercall.h>
>>  #include <asm/xen/page.h>
>>
>> +/*
>> + * This is an estimation of the maximum possible frags a SKB might
>> + * have, anything larger than this is considered malicious. Typically
>> + * Linux has 16 to 19.
>> + */
>
> Do you mean "max possible /slots/" a packet might have?
>

Yes.

>> @@ -968,48 +987,114 @@ static struct gnttab_copy *xen_netbk_get_requests(struct xen_netbk *netbk,
> [...]
>> +                             /* Poison these fields, corresponding
>> +                              * fields for head tx req will be set
>> +                              * to correct values after the loop.
>> +                              */
>> +                             netbk->mmap_pages[pending_idx] = (void *)(~0UL);
>> +                             pending_tx_info[pending_idx].head =
>> +                                     INVALID_PENDING_RING_IDX;
>
> Do you need to poison both values?
>

Just for safety. I have a BUG_ON in the release path to check for
possible errors.
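
Concretely, the release path can assert on both fields (a sketch
reusing names from the quoted diff; whether the checks look exactly
like this is my assumption):

    /* A slot being released must never still carry the poison page
     * pointer from the coalescing loop. */
    BUG_ON(netbk->mmap_pages[pending_idx] == (void *)(~0UL));

    /* And a head slot must have had its head index fixed up. */
    BUG_ON(pending_tx_info[pending_idx].head == INVALID_PENDING_RING_IDX);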

>> +
>> +                             if (unlikely(!first)) {
>
> This isn't unlikely is it?
>

For a big packet the chance is 1 in max_skb_slots, i.e. 5% (1/20) in
the default case.

>> +                                     first = &pending_tx_info[pending_idx];
>> +                                     start_idx = index;
>> +                                     head_idx = pending_idx;
>> +                             }
>
> Instead of setting first here why not move the code below here?
>

Because first->req.size needs to be set to dst_offset, which is only
known once the loop has finished, that assignment has to stay after the
loop anyway. So I set the other fields there as well.
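
To make the ordering concrete, the loop has roughly this shape (a
condensed sketch; only the names visible in the quoted diff are real,
the loop condition and copy setup are paraphrased):

    struct pending_tx_info *first = NULL;
    unsigned int dst_offset = 0;

    /* Coalesce consecutive tx requests into one backend page. */
    while (/* the current tx request still fits at dst_offset */) {
            /* ... set up a grant copy op into the page and advance
             * dst_offset by the copied length ... */

            /* Poison per-slot fields; the head slot is fixed up
             * after the loop. */
            netbk->mmap_pages[pending_idx] = (void *)(~0UL);
            pending_tx_info[pending_idx].head = INVALID_PENDING_RING_IDX;

            if (unlikely(!first)) {
                    first = &pending_tx_info[pending_idx];
                    start_idx = index;
                    head_idx = pending_idx;
            }
    }

    /* Only now is the page's final fill level (dst_offset) known. */
    first->req.offset = 0;
    first->req.size = dst_offset;
    first->head = start_idx;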

>> +             first->req.offset = 0;
>> +             first->req.size = dst_offset;
>> +             first->head = start_idx;
>> +             set_page_ext(page, netbk, head_idx);
>> +             netbk->mmap_pages[head_idx] = page;
>> +             frag_set_pending_idx(&frags[shinfo->nr_frags], head_idx);
>
>> @@ -1548,13 +1649,34 @@ static void xen_netbk_idx_release(struct xen_netbk *netbk, u16 pending_idx,
> [...]
>> +             /* Setting any number other than
>> +              * INVALID_PENDING_RING_IDX indicates this slot is
>> +              * starting a new packet / ending a previous packet.
>> +              */
>> +             pending_tx_info->head = 0;
>
> This doesn't look needed.  It will be initialized again when reusing
> this pending_tx_info again, right?
>

Yes, it is needed. Otherwise netback sends responses based on invalid
tx_info and crashes netfront before the re-initialization path is ever
reached.
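
To illustrate why, here is the rough shape of the release walk (a
condensed sketch; the surrounding helpers such as pending_index() and
make_tx_response() are assumed from the driver context):

    do {
            u16 info_idx = netbk->pending_ring[pending_index(head)];

            pending_tx_info = &netbk->pending_tx_info[info_idx];
            make_tx_response(vif, &pending_tx_info->req, status);

            /* Mark the slot as a packet boundary again.  A stale
             * INVALID_PENDING_RING_IDX here would let a later walk
             * run past the end of its own packet and respond with
             * bogus tx_info. */
            pending_tx_info->head = 0;

            peek = netbk->pending_ring[pending_index(++head)];
    } while (netbk->pending_tx_info[peek].head ==
             INVALID_PENDING_RING_IDX);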


Wei.
