
Re: [Xen-users] Xen/OVS - VLAN offloading



Sure; by 'vnet' I meant 'vconfig'.
-- 
Best regards,
Eugene Istomin


On Friday, May 31, 2013 01:20:06 PM Eugene Istomin wrote:
> Jesse from OVS maillist said:
> 
> "You are seeing the result of TSO not functioning in the presence of
> vlans. This is one of the other offloads that I was referring to
> before but it's not directly the result of the features that you
> showed. Regardless, this is a limitation of Xen and not something that
> OVS affects"
> 
> I dug deeper into the testbed VM VLANs and found:
> 
> #ethtool -k vlan1002
> tx-checksum-ip-generic: off
> generic-segmentation-offload: off
> tx-nocache-copy: off
> tx-checksumming: off
> 
> #ethtool -K vlan1002 tx on
> Could not change any device features
> 
> Does Linux have offload on vnet VLAN interfaces, like VXLAN currently has
> (http://lists.openwall.net/netdev/2013/02/16/2)?
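As an aside on the "Could not change any device features" error above: ethtool marks features the driver does not allow to be toggled with a "[fixed]" suffix, and on a VLAN device most offload flags are derived from the parent NIC's vlan_features rather than settable directly. A minimal diagnostic sketch — the here-document below is an illustrative listing, not actual output from this testbed; on a live system you would pipe `ethtool -k vlan1002` instead:

```shell
# Print the names of offload features that ethtool reports as "[fixed]",
# i.e. inherited from the lower device and not changeable via `ethtool -K`.
# Replace the here-document with `ethtool -k vlan1002` on a real system.
awk '/\[fixed\]/ {print $1}' <<'EOF'
tx-checksumming: off
tx-checksum-ip-generic: off [fixed]
generic-segmentation-offload: off
tcp-segmentation-offload: off [fixed]
EOF
```

Any feature printed by this filter will make `ethtool -K <dev> <feature> on` fail in exactly the way shown above.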
> > On Fri, 2013-05-31 at 11:18 +0300, Eugene Istomin wrote:
> > > Ian,
> > > 
> > > in my testbed, untagging by OVS gives ~2x more bandwidth than untagging
> > > by the VM.
> > > 
> > > All interfaces have MTU=9000
> > > 
> > > 
> > > 1)untagged by VM interface (in OVS like "trunks: [1002]")
> > > 
> > > #atop from VM
> > > NET | transport    | tcpi   22733 | tcpo   80191 | udpi       0 | udpo       4
> > > NET | eth0    ---- | pcki   22736 | pcko   80243 | si   12 Mbps | so 5777 Mbps
> > > NET | vlan100 ---- | pcki   22738 | pcko   80245 | si 9495 Kbps | so 5775 Mbps
> > > 
> > > #atop from Dom0
> > > CPU | sys      57% | irq      39%
> > > cpu | sys      58% | irq      41%
> > > ..
> > > NET | vif1.0  ---- | pcki  227727 | pcko  797502 | si   10 Mbps | so 5743 Mbps
> > > NET | vif2.0  ---- | pcki  797748 | pcko  227717 | si 5736 Mbps | so   12 Mbps
> > > 
> > > 
> > > 
> > > 2) untagged by OVS interface (in OVS like "tag: 1002")
> > > #atop from VM  - untagged by OVS interface
> > > NET | transport    | tcpi    8495 | tcpo  163131 | udpi       0 | udpo       0
> > > NET | eth1    ---- | pcki    8495 | pcko   24718 | si 4485 Kbps | so   11 Gbps
> > > 
> > > #atop from Dom0
> > > CPU | sys      96% | irq       4%
> > > cpu | sys      96% | irq       4%
> > > ..
> > > NET | vif1.1  ---- | pcki   75974 | pcko  247608 | si 3160 Kbps | so   11 Gbps
> > > NET | vif2.1  ---- | pcki  247616 | pcko   75971 | si   11 Gbps | so 4011 Kbps
> > > 
> > > As you can see, the second variant has full netback sys load in Dom0,
> > > while the first has a high IRQ load and high pcki/pcko counts.
> > > Is this behavior correct?
> > 
> > I'd have expected the second case to be lower overhead, which it is. I
> > would expect the first case to be higher overhead, which it is, but it
> > seems a lot higher than I would have handwavily expected -- I'm not sure
> > why vlan offload on the vif device should matter to that extent.
> > 
> > Wei, what do you think of implementing vif offload on the netback vif
> > devices? I don't necessarily mean over the wire protocol, although that
> > might be worth investigating separately, just at the netdev interface --
> > i.e. inserting the VLAN header into the ring as part of
> > xen_netbk_tx_build_gops() processing or whatever?
> > 
> > Ian.
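At the frame level, the insertion Ian describes amounts to prepending a 4-byte 802.1Q tag (TPID 0x8100 followed by a 16-bit TCI carrying the priority and VLAN ID) after the destination and source MAC addresses. A minimal sketch, purely to illustrate that layout — this is not Xen code, and the function name is hypothetical:

```python
import struct

TPID_8021Q = 0x8100  # EtherType value that marks an 802.1Q tagged frame

def insert_vlan_header(frame: bytes, vlan_id: int, prio: int = 0) -> bytes:
    """Insert a 4-byte 802.1Q tag after the 12 bytes of dst/src MAC.

    Illustration only: shows what "inserting the VLAN header" means at the
    frame level, not how xen_netbk_tx_build_gops() would actually do it.
    """
    if not 0 <= vlan_id < 4096:
        raise ValueError("VLAN ID must fit in 12 bits")
    tci = (prio << 13) | vlan_id  # PCP (3 bits) | DEI (1 bit, 0) | VID (12 bits)
    tag = struct.pack("!HH", TPID_8021Q, tci)
    return frame[:12] + tag + frame[12:]

# Example: tag a minimal IPv4 frame with VLAN 1002, as in the tests above.
frame = b"\xff" * 6 + b"\xaa" * 6 + b"\x08\x00" + b"payload"
tagged = insert_vlan_header(frame, 1002)
```

The resulting frame is 4 bytes longer, with bytes 12-13 holding 0x8100 and bytes 14-15 holding the VID (0x03EA for VLAN 1002); the original EtherType and payload follow unchanged.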
> 
> _______________________________________________
> Xen-users mailing list
> Xen-users@xxxxxxxxxxxxx
> http://lists.xen.org/xen-users
