
Set Up an Open Virtual Switch

Open vSwitch supports flows, VLANs, trunking and port aggregation, just like other major league switches. Read on to discover how to set up an Open vSwitch on an Ubuntu 12.04 system.

Open vSwitch (OVS) is an open source Layer 2 virtual switch, which offers many Layer 2 features and even supports some Layer 3 protocols. It is one of the key components behind many public and private cloud offerings.


As with any switch, OVS has two paths—a fast data path and a decision-making control path.

Control path: This runs in user space and builds the table of forwarding decisions.

Data path: This runs in kernel space; packets flow through this path based on the decisions made by the control path.
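The hit-or-miss logic between the two paths can be sketched roughly as follows. This is a toy illustration in bash, not OVS code; the flow key and the action string are made up for the example:

```shell
#!/usr/bin/env bash
# Toy sketch of the OVS lookup logic (illustrative only, not OVS code).
# The kernel data path keeps a cache of exact-match flows; on a miss,
# the packet goes up to the user space controller, which installs a flow.
declare -A flow_table   # kernel flow cache: flow key -> action

lookup() {
    local key="$1"
    if [[ -n "${flow_table[$key]}" ]]; then
        echo "fast path: ${flow_table[$key]}"   # cache hit, handled in the kernel
    else
        flow_table[$key]="output:2"             # upcall: controller installs a flow
        echo "slow path: flow installed for $key"
    fi
}

lookup "in_port(1),ipv4(10.168.122.3->10.168.122.4)"   # first packet: miss
lookup "in_port(1),ipv4(10.168.122.3->10.168.122.4)"   # later packets: hit
```

The first call takes the slow path and installs a flow; every later packet with the same key takes the fast path.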

How it works

Whenever a packet arrives, the data path checks it against the existing flows, and if there is no matching flow, the packet is sent to the user space controller along with a flow key, so that a flow can be generated. The flow key describes the packet in general terms.

Sample flow key

in_port(1), eth(src=e0:19:2f:10:0a:ba, dst=00:21:f3:f1:89:a1), eth_type(0x0800), ipv4(src=10.168.122.3, dst=10.168.122.4, proto=17, tos=0, frag=no), udp(src=31639, dst=8080)

Now, based on the flow key, the controller will either drop the packet or install a new flow in the kernel space. This flow remains in the kernel for as long as packets keep matching it; if no packet matches it for 5 seconds, it expires.
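The idle-timeout behaviour can be sketched in the same toy style; again, this is an illustration and not OVS code:

```shell
#!/usr/bin/env bash
# Toy illustration of flow idle expiry (not OVS code): a flow expires
# once no packet has matched it for longer than the timeout (5 s here).
declare -A last_used    # flow key -> timestamp of the last matching packet
TIMEOUT=5

touch_flow() { last_used[$1]=$(date +%s); }                        # a packet matched the flow
is_expired() { (( $(date +%s) - ${last_used[$1]} > TIMEOUT )); }   # idle longer than TIMEOUT?

touch_flow "flow1"
if is_expired "flow1"; then echo "flow1 expired"; else echo "flow1 active"; fi
```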

As represented by the dotted line in the figure, the first packet goes through the controller, and all subsequent packets go through the data path.

Figure 1: Packet flow (VM1 and VM2 attach to bridge1 through vif1.0 and vif2.0 on the hypervisor; the controller sits above the data path, in user space)

Features

Listed below are some of the features of OVS:

- QoS
- Port mirroring
- NIC bonding
- VLAN
- GRE support
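As a taste of these features, the commands below show how NIC bonding and a GRE tunnel could be configured on the bridge created later in this article. The interface names (eth0, eth1) and the remote IP address are placeholders; adapt them to your own set-up:

```shell
# Bond two physical NICs into one logical port on bridge1
ovs-vsctl add-bond bridge1 bond0 eth0 eth1

# Add a GRE tunnel port to bridge1, pointing at a remote hypervisor
ovs-vsctl add-port bridge1 gre0 -- set interface gre0 type=gre options:remote_ip=192.0.2.10
```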

Installing OVS

To install OVS on Ubuntu 12.04, run the following command (as root):

apt-get install openvswitch-switch

After installing OVS, check that the kernel module is loaded, using the following command:

lsmod | grep openvswitch

To view the version number, use the code below:

ovs-vsctl -V
ovs-vsctl (Open vSwitch) 1.4.0+build0
Compiled Feb 18 2013 13:13:22

Create the first bridge with the following command:

ovs-vsctl add-br bridge1

Add the virtual interface for virtual machines to the bridge:

ovs-vsctl add-port bridge1 vif1.0
ovs-vsctl add-port bridge1 vif2.0

...where vif1.0 is the virtual interface of VM1 with IP 10.168.122.3, and vif2.0 is the virtual interface of VM2 with IP 10.168.122.4.

You can check the connectivity as follows:

Figure 2: Block diagram (VM1 at 10.168.122.3 and VM2 at 10.168.122.4, connected through bridge1)

ping 10.168.122.4 -c 10
--- 10.168.122.4 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 8998ms
rtt min/avg/max/mdev = 0.140/0.243/0.615/0.130 ms

The corresponding data path flows are given below:

ovs-dpctl dump-flows bridge1

in_port(2),eth(src=52:9a:13:f4:90:c8,dst=ae:0a:5e:8a:cc:2b),eth_type(0x0800),ipv4(src=10.168.122.4,dst=10.168.122.3,proto=1,tos=0,ttl=64,frag=no),icmp(type=0,code=0), packets:9, bytes:882, used:0.690s, actions:1

in_port(1),eth(src=ae:0a:5e:8a:cc:2b,dst=52:9a:13:f4:90:c8),eth_type(0x0800),ipv4(src=10.168.122.3,dst=10.168.122.4,proto=1,tos=0,ttl=64,frag=no),icmp(type=8,code=0), packets:9, bytes:882, used:0.690s, actions:2

The above result is almost self-explanatory; the in_port numbers can be mapped to interfaces with:

ovs-dpctl show

system@bridge1:
        lookups: hit:42 missed:26 lost:0
        flows: 0
        port 0: bridge1 (internal)
        port 1: vif1.0
        port 2: vif2.0

Using VLAN for traffic isolation

Traffic between virtual machines can be isolated using VLANs. OpenStack Neutron uses this feature of OVS to offer traffic isolation between different tenants, extending it over GRE tunnels.

ovs-vsctl set port vif2.0 tag=2

This isolates VM2's traffic from the other VMs with a VLAN tag of 2.
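For example, to place both of the VMs used earlier in VLAN 2 while keeping a third, hypothetical VM (vif3.0, not part of the earlier set-up) isolated in VLAN 3, you could run:

```shell
ovs-vsctl set port vif1.0 tag=2
ovs-vsctl set port vif2.0 tag=2
ovs-vsctl set port vif3.0 tag=3   # vif3.0 is hypothetical

# Verify the tag assigned to a port
ovs-vsctl list port vif2.0 | grep tag
```

Ports with the same tag can reach each other; ports with different tags cannot.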

A few known issues with Open Virtual Switch

OVS is engineered to offer many more features than native Linux bridges, but these come with a small performance penalty.

Since every new connection needs a flow to be generated in user space, heavy network traffic with many new connections can result in heavy CPU utilisation. While the kernel waits for a flow from user space, packets from other new connections with no existing flows are also queued to user space; if the user space buffer fills up, new packets are dropped.

This can be seen in the following output:

ovs-dpctl show

system@bridge1:
        lookups: hit:207571933 missed:226782808 lost:26742
        flows: 0
        port 0: bridge1 (internal)
        port 1: vif1.0

…where:

- Hit refers to packets matched against existing kernel flows;
- Missed means there was no existing flow, so the packet was sent to user space for flow generation;
- Lost means there was not enough free buffer space in user space and, hence, the packet was dropped.
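These counters can be turned into a quick health check. The following bash sketch (my own helper, not an OVS tool) computes the miss rate and lost count from the lookups line shown above:

```shell
#!/usr/bin/env bash
# Compute the data path miss rate and lost count from the "lookups:" line
# of `ovs-dpctl show`. The sample numbers are from this article's output.
miss_rate() {
    local stats hit missed lost
    stats=$(cat)                                           # read the lookups line from stdin
    hit=$(grep -o 'hit:[0-9]*' <<<"$stats" | cut -d: -f2)
    missed=$(grep -o 'missed:[0-9]*' <<<"$stats" | cut -d: -f2)
    lost=$(grep -o 'lost:[0-9]*' <<<"$stats" | cut -d: -f2)
    # missed/(hit+missed) = share of packets that needed a user space upcall
    awk -v h="$hit" -v m="$missed" -v l="$lost" \
        'BEGIN { printf "miss rate: %.1f%%, lost packets: %d\n", 100*m/(h+m), l }'
}

echo "lookups: hit:207571933 missed:226782808 lost:26742" | miss_rate
# → miss rate: 52.2%, lost packets: 26742
```

A miss rate this high (over half of all lookups) is exactly the symptom described above: the bridge is spending much of its time on user space upcalls.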

OVS 1.11 addresses this issue by introducing kernel flow wildcarding, or 'megaflows'. With megaflows, a single kernel flow can cover multiple TCP/UDP ports and protocols, so one flow matching on source and destination is enough to handle many connections.

Even though this minimises context switches and improves performance, it doesn't completely solve the high CPU usage; to address this, OVS 2.0 comes with a multithreaded user space daemon and many other improvements.

Acknowledgment

I would like to thank Rushikesh Jadhav for his help and appreciation.

By: Ananthakrishnan V R

The author works as an IaaS cloud administrator at the ESDS fully managed data center. He has more than three years of experience in open source virtualisation and Linux servers. His main areas of interest are server virtualisation, cloud security, server security, Linux, OpenStack and other open source technologies.
