
Re: [Xen-devel] [PATCH net v2 1/3] xen-netback: remove pointless clause from if statement



Thursday, March 27, 2014, 2:54:46 PM, you wrote:

>> -----Original Message-----
>> From: Sander Eikelenboom [mailto:linux@xxxxxxxxxxxxxx]
>> Sent: 27 March 2014 13:46
>> To: Paul Durrant
>> Cc: xen-devel@xxxxxxxxxxxxx; netdev@xxxxxxxxxxxxxxx; Ian Campbell; Wei Liu
>> Subject: Re: [PATCH net v2 1/3] xen-netback: remove pointless clause from if
>> statement
>> 
>> 
>> Thursday, March 27, 2014, 1:56:11 PM, you wrote:
>> 
>> > This patch removes a test in start_new_rx_buffer() that checks whether
>> > a copy operation is less than MAX_BUFFER_OFFSET in length, since
>> > MAX_BUFFER_OFFSET is defined to be PAGE_SIZE and the only caller of
>> > start_new_rx_buffer() already limits copy operations to PAGE_SIZE or less.
>> 
>> > Signed-off-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
>> > Cc: Ian Campbell <ian.campbell@xxxxxxxxxx>
>> > Cc: Wei Liu <wei.liu2@xxxxxxxxxx>
>> > Cc: Sander Eikelenboom <linux@xxxxxxxxxxxxxx>
>> > ---
>> 
>> > v2:
>> >  - Add BUG_ON() as suggested by Ian Campbell
>> 
>> >  drivers/net/xen-netback/netback.c |    4 ++--
>> >  1 file changed, 2 insertions(+), 2 deletions(-)
>> 
>> > diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
>> > index 438d0c0..72314c7 100644
>> > --- a/drivers/net/xen-netback/netback.c
>> > +++ b/drivers/net/xen-netback/netback.c
>> > @@ -192,8 +192,8 @@ static bool start_new_rx_buffer(int offset, unsigned long size, int head)
>> >          * into multiple copies tend to give large frags their
>> >          * own buffers as before.
>> >          */
>> > -       if ((offset + size > MAX_BUFFER_OFFSET) &&
>> > -           (size <= MAX_BUFFER_OFFSET) && offset && !head)
>> > +       BUG_ON(size > MAX_BUFFER_OFFSET);
>> > +       if ((offset + size > MAX_BUFFER_OFFSET) && offset && !head)
>> >                 return true;
>> >
>> >         return false;
>> 
>> Hi Paul,
>> 
>> Unfortunately .. no good ..
>> 
>> With these patches (v2) applied to 3.14-rc8 it all seems to work well,
>> until I run my test case .. it still chokes and now effectively permanently
>> stalls network traffic to that guest.
>> 
>> No error messages or anything in either xl dmesg or dmesg on the host, and
>> nothing in dmesg in the guest either.
>> 
>> But in the guest the TX bytes ifconfig reports for eth0 still increase, while
>> RX bytes do nothing, so it seems only the RX path is affected.
>> 

> But you're not getting ring overflow, right? So that suggests this series is 
> working and you're now hitting another problem? I don't see how these patches 
> could directly cause the new behaviour you're seeing.

Don't know .. however, I previously tested:
        - unconditionally doing "max_slots_needed + 1" in "net_rx_action()",
and that circumvented the problem reliably without causing anything else
        - reverting the calculation of "max_slots_needed + 1" in
"net_rx_action()" to what it was before (the newer per-frag estimate this
reverts is sketched further below):
                int max = DIV_ROUND_UP(vif->dev->mtu, PAGE_SIZE);
                if (vif->can_sg || vif->gso_mask || vif->gso_prefix_mask)
                        max += MAX_SKB_FRAGS + 1; /* extra_info + frags */

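For context, the newer per-frag estimate that commit ca2f09f2 introduced (and
that the second test above reverts) is roughly the following. This is a hedged
reconstruction from memory rather than a quote of the 3.14 tree; identifiers
such as xenvif_rx_ring_slots_available, vif->rx_queue and need_to_notify are my
recollection and may differ slightly:

                /* Worst-case slot estimate per skb: one or more slots for the
                 * linear area, one slot per page of each frag, plus one extra
                 * slot if GSO information has to be sent.
                 */
                max_slots_needed = DIV_ROUND_UP(offset_in_page(skb->data) +
                                                skb_headlen(skb), PAGE_SIZE);
                for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
                        unsigned int size;

                        size = skb_frag_size(&skb_shinfo(skb)->frags[i]);
                        max_slots_needed += DIV_ROUND_UP(size, PAGE_SIZE);
                }
                if (skb_shinfo(skb)->gso_type &
                    (SKB_GSO_TCPV4 | SKB_GSO_TCPV6))
                        max_slots_needed++;

                /* Defer the skb if the ring does not currently have that many
                 * slots free.
                 */
                if (!xenvif_rx_ring_slots_available(vif, max_slots_needed)) {
                        skb_queue_head(&vif->rx_queue, skb);
                        need_to_notify = true;
                        break;
                }

The first test above simply adds one to this estimate, which, if it really does
circumvent the problem, suggests the estimate can end up short of what the
copy path actually consumes.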
So that leads me to think it's something caused by this patch set.
Patch 1 could be a candidate .. perhaps that check was needed for some reason
(a sketch of the function as it looks after that patch is below) .. I will see
what not applying that one does.
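For reference, this is roughly what start_new_rx_buffer() looks like with
patch 1 applied, based on the diff quoted above (a sketch, not a verbatim copy
of the tree):

        static bool start_new_rx_buffer(int offset, unsigned long size, int head)
        {
                /* The only caller is expected to split copy operations at
                 * PAGE_SIZE (== MAX_BUFFER_OFFSET) already, so the old
                 * "(size <= MAX_BUFFER_OFFSET)" clause is replaced by an
                 * assertion of that assumption.
                 */
                BUG_ON(size > MAX_BUFFER_OFFSET);

                /* Start a new buffer if this chunk would run past the end of
                 * the current one, the current buffer already holds some data,
                 * and this is not the head buffer.
                 */
                if ((offset + size > MAX_BUFFER_OFFSET) && offset && !head)
                        return true;

                return false;
        }

With the old clause, a size larger than MAX_BUFFER_OFFSET would quietly fall
through to "return false"; with the patch it would hit the BUG_ON instead and
show up in the logs.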

--
Sander


>   Paul

>> So it now seems I have the situation which you described in the commit
>> message from "ca2f09f2b2c6c25047cfc545d057c4edfcfe561c",
>> "Without this patch I can trivially stall netback permanently by just doing a
>> large guest to guest file copy between two Windows Server 2008R2 VMs on a
>> single host."
>> 
>> --
>> Sander




_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

