
Re: [Xen-API] Bug (and fix?) for scripts/setup-vif-rules


  • To: 'Kevin Tower' <ktower@xxxxxxxxxxxxxxx>, "xen-api@xxxxxxxxxxxxx" <xen-api@xxxxxxxxxxxxx>
  • From: Rob Hoes <Rob.Hoes@xxxxxxxxxx>
  • Date: Tue, 30 Apr 2013 11:11:24 +0100
  • Accept-language: en-US
  • Delivery-date: Tue, 30 Apr 2013 10:11:46 +0000
  • List-id: User and development list for XCP and XAPI <xen-api.lists.xen.org>
  • Thread-index: Ac5FhOZ8/L4oMW6eTOGt3KPKvRc/0QABastA
  • Thread-topic: [Xen-API] Bug (and fix?) for scripts/setup-vif-rules

Hi Kevin,

 

Thanks for the report. Your analysis of the problem sounds correct. I did not realise that this was not working on VLANs.

 

If you could create a pull request against the xen-api repo on GitHub, that would be great. Feel free to send questions to the list if you need help with this.

 

Cheers,

Rob

 

From: xen-api-bounces@xxxxxxxxxxxxx [mailto:xen-api-bounces@xxxxxxxxxxxxx] On Behalf Of Kevin Tower
Sent: 30 April 2013 5:51 AM
To: xen-api@xxxxxxxxxxxxx
Subject: [Xen-API] Bug (and fix?) for scripts/setup-vif-rules

 

Hello,

 

My name is Kevin Tower, and I am a systems engineer and self-proclaimed virtualization evangelist at the University of Washington.  We make use of the downstream fork of XCP, XenServer, but I use XCP in some of my personal dev environments.  We have been attempting to make use of the "port locking" functionality that was added in XenServer 6.1 / XCP 1.6, but I believe it to be broken.  Details of the issue follow, as well as a proposed fix.

 

I see that George Shuklin made a commit 3 months ago (see https://github.com/xen-org/xen-api/commit/06c2d0fedc7031c27ad9215a751c404fde1ebb70 ) that changed the script with respect to this feature, but my tests have shown that this change is insufficient to handle all use cases.

 

When attempting to use the port-locking features on a VIF that is connected to a "normal" network (no VLAN tags), the logic works correctly.  However, it breaks when a VLAN-tagged network comes into play.  What happens is that the VLAN network is added to the vSwitch as a "fake bridge," to use the vSwitch terminology, which is a child object of the primary bridge device.  When a VIF is attached to the (VLAN-tagged) network, it is associated with this fake bridge instead of the real bridge.  So, when this script is called against such a VIF, the get_bridge_name_vswitch() function returns the name of the fake bridge instead of the real one.
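
In rough Python terms, I believe the current lookup amounts to something like this (a sketch of the behaviour, not the actual script code):

    import subprocess

    def get_bridge_name_vswitch(vif_name):
        # iface-to-br maps a port to the bridge it is attached to; for a VIF
        # on a VLAN-tagged network, that is the fake bridge, not its parent.
        return subprocess.check_output(
            ["ovs-vsctl", "iface-to-br", vif_name]).decode().strip()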

 

The problem is that most of the ovs-* Open vSwitch utilities, including the ovs-ofctl that is used to set up the VIF filters, don't recognize the fake bridge as a valid bridge device; they fail, and the port-locking rules never get added.  Also, the way this script is written, these failures are silent because the return code is not captured, though I have not done anything about that here.
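
As an illustration only (this is not part of my fix, and the helper name is hypothetical), capturing the return code with something like subprocess.check_call would surface these failures:

    import subprocess

    def add_flow(bridge, flow_spec):
        # check_call raises CalledProcessError on a non-zero exit status, so
        # a failed ovs-ofctl invocation would no longer pass silently.
        subprocess.check_call(["ovs-ofctl", "add-flow", bridge, flow_spec])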

 

I have tested a fix that appears to work in both the VLAN and non-VLAN cases.  Roughly described: after getting a bridge device name by executing "ovs-vsctl iface-to-br vif_name", I run "ovs-vsctl br-to-parent bridge_device" and return that instead.  If the bridge device is a fake bridge, br-to-parent returns the "real" bridge device name; if it is already a real bridge device, it returns the same device name that was passed as a parameter.
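
In code, the fix would look roughly like this (again a sketch; the function name is from my reading of the script, and the body is illustrative):

    import subprocess

    def get_bridge_name_vswitch(vif_name):
        # On a VLAN network, iface-to-br resolves the VIF to a fake bridge.
        bridge = subprocess.check_output(
            ["ovs-vsctl", "iface-to-br", vif_name]).decode().strip()
        # br-to-parent returns the real parent of a fake bridge, and simply
        # echoes the name back when given a real (non-fake) bridge.
        return subprocess.check_output(
            ["ovs-vsctl", "br-to-parent", bridge]).decode().strip()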

 

I am somewhat unfamiliar with the process for contributing bug fixes to an open source project (if this fix is accepted, it will be my first!), and I am also pretty new to git (I've used other source control systems, though).  What is the best way for me to provide my suggested fix?  Should I post the diff here on the mailing list?  Or should I create a pull request from my own fork of the xen-api repository on GitHub that has the suggested code changes?

 

Thanks in advance,

 

Kevin Tower

_______________________________________________
Xen-api mailing list
Xen-api@xxxxxxxxxxxxx
http://lists.xen.org/cgi-bin/mailman/listinfo/xen-api

 

