
Using Open vSwitch with DPDK to transfer data between virtual machines in network functions virtualization (NFV) scenarios

The Data Plane Development Kit (DPDK) provides high-performance packet processing libraries and user-space drivers. Starting with Open vSwitch (OVS) version 2.4, we can use the DPDK-optimized vHost path in OVS; DPDK support itself has been available since OVS version 2.2.

Using DPDK in OVS brings significant performance advantages. As in other DPDK applications, network throughput (the number of packets transmitted per second) increases dramatically, while latency is significantly reduced. In addition, some of the most performance-critical parts of OVS have been optimized using the DPDK packet processing libraries.

In this document, we will configure OVS with DPDK for data transfer between virtual machines, with each vhost-user port connected to a separate virtual machine. We will then run a simple iperf3 throughput test and compare the result with an OVS configuration without DPDK, to evaluate the advantages that OVS with DPDK provides.


Open vSwitch can be installed using the standard package managers of common Linux distributions, but DPDK support is not enabled by default, so before proceeding you need to build Open vSwitch with DPDK.

Detailed instructions for installing and using OVS with DPDK can be found in the official OVS documentation. In this document, we will look at the main steps and, in particular, the scenario of using DPDK vhost-user ports.

Requirements for OVS and DPDK


Before compiling DPDK and OVS, make sure that all necessary requirements are met.

Software development packages in standard Linux distributions usually satisfy most of these requirements. For example, on yum- (or dnf-) based distributions, you can install them with the following command:

yum install "@Development Tools" automake tunctl kernel-tools "@Virtualization Platform" "@Virtualization" pciutils hwloc numactl 

In addition, make sure that QEMU version 2.2.0 or later is installed on the system, as described in the DPDK vhost-user prerequisites documentation.
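
A quick way to check the installed QEMU version (a minimal sketch; the exact binary name may differ between distributions):

 qemu-system-x86_64 --version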

Build the target DPDK environment for OVS


To build OVS with DPDK, you need to download the DPDK source code and prepare the target environment. For more information about using DPDK, see the DPDK documentation. The main actions are shown in the following code snippet:

 curl -O http://dpdk.org/browse/dpdk/snapshot/dpdk-2.1.0.tar.gz
 tar -xvzf dpdk-2.1.0.tar.gz
 cd dpdk-2.1.0
 export DPDK_DIR=`pwd`
 sed 's/CONFIG_RTE_BUILD_COMBINE_LIBS=n/CONFIG_RTE_BUILD_COMBINE_LIBS=y/' -i config/common_linuxapp
 make install T=x86_64-ivshmem-linuxapp-gcc
 cd x86_64-ivshmem-linuxapp-gcc
 EXTRA_CFLAGS="-g -Ofast" make -j10

Build OVS with DPDK


Once the DPDK target environment is built, you can download the latest OVS source code and build it with DPDK support enabled. Standard documentation for building OVS with DPDK is available in the OVS source tree. Here we cover only the basic steps.

 git clone https://github.com/openvswitch/ovs.git
 cd ovs
 export OVS_DIR=`pwd`
 ./boot.sh
 ./configure --with-dpdk="$DPDK_DIR/x86_64-ivshmem-linuxapp-gcc/" CFLAGS="-g -Ofast"
 make 'CFLAGS=-g -Ofast -march=native' -j10

We now have a fully built OVS with DPDK support enabled. All standard OVS utilities are located in the $OVS_DIR/utilities/ folder, and the OVS database utilities are located in the $OVS_DIR/ovsdb/ folder. We will use these utilities in the following steps.
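
For convenience, you can add these folders to your PATH so the utilities can be invoked by name (an optional step; the commands below use explicit paths):

 export PATH="$OVS_DIR/utilities:$OVS_DIR/ovsdb:$PATH"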

Creating the OVS database and starting the ovsdb-server


Before starting the main OVS process, ovs-vswitchd, you need to initialize the OVS database and start ovsdb-server. The following commands show how to clean up any previous state, create a new OVS database, and start an ovsdb-server instance.

 pkill -9 ovs
 rm -rf /usr/local/var/run/openvswitch
 rm -rf /usr/local/etc/openvswitch/
 rm -f /usr/local/etc/openvswitch/conf.db
 mkdir -p /usr/local/etc/openvswitch
 mkdir -p /usr/local/var/run/openvswitch
 cd $OVS_DIR
 ./ovsdb/ovsdb-tool create /usr/local/etc/openvswitch/conf.db ./vswitchd/vswitch.ovsschema
 ./ovsdb/ovsdb-server --remote=punix:/usr/local/var/run/openvswitch/db.sock --remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach
 ./utilities/ovs-vsctl --no-wait init

Configure the host and network adapters to use OVS with DPDK


DPDK requires the host operating system to support huge memory pages, and the DPDK user-space poll mode drivers (PMD) must be enabled for the network adapters.
To enable hugepages and use the VFIO user-space driver, add the following parameters to GRUB_CMDLINE_LINUX in /etc/default/grub, then regenerate the GRUB configuration and reboot the system:

 default_hugepagesz=1G hugepagesz=1G hugepages=16 hugepagesz=2M hugepages=2048 iommu=pt intel_iommu=on isolcpus=1-13,15-27

 grub2-mkconfig -o /boot/grub2/grub.cfg
 reboot

Depending on the amount of memory available in the system, you can adjust the number and size of hugepages. The isolcpus option isolates the listed CPUs from the Linux scheduler so that they can be dedicated (pinned) to DPDK-based applications.
After rebooting the system, check the kernel command line and the allocated hugepages, as shown below.
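
A minimal way to perform this check from the shell (standard procfs locations):

 cat /proc/cmdline
 grep Huge /proc/meminfo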



Now you need to mount the hugepage filesystems and load the vfio-pci user-space driver.
 mkdir -p /mnt/huge
 mkdir -p /mnt/huge_2mb
 mount -t hugetlbfs hugetlbfs /mnt/huge
 mount -t hugetlbfs none /mnt/huge_2mb -o pagesize=2MB
 modprobe vfio-pci
 cp $DPDK_DIR/tools/dpdk_nic_bind.py /usr/bin/.
 dpdk_nic_bind.py --status
 dpdk_nic_bind.py --bind=vfio-pci 05:00.1

The following screen capture shows sample output for the above commands.



If the intended use case only involves data transfer between virtual machines and no physical network adapters are used, you can skip the vfio-pci steps above.

Run ovs-vswitchd


The OVS database is now initialized and the host is configured to use OVS with DPDK. The next step is to start the main process, ovs-vswitchd.

 modprobe openvswitch
 $OVS_DIR/vswitchd/ovs-vswitchd --dpdk -c 0x2 -n 4 --socket-mem 2048 -- unix:/usr/local/var/run/openvswitch/db.sock --pidfile --detach

Creating a bridge and DPDK vhost-user ports for use between virtual machines


In our test scenario, we will create a bridge and add two DPDK vhost-user ports. Optionally, you can also add the physical network adapter that we bound to vfio-pci earlier.

 $OVS_DIR/utilities/ovs-vsctl show
 $OVS_DIR/utilities/ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
 $OVS_DIR/utilities/ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk
 $OVS_DIR/utilities/ovs-vsctl add-port br0 vhost-user1 -- set Interface vhost-user1 type=dpdkvhostuser
 $OVS_DIR/utilities/ovs-vsctl add-port br0 vhost-user2 -- set Interface vhost-user2 type=dpdkvhostuser

The following screen capture shows the final OVS configuration.



Using DPDK vhost-user ports with virtual machines


A description of how to create the virtual machines is beyond the scope of this document. Assuming we have two virtual machine images (for example, f21vm1.qcow2 and f21vm2.qcow2), the following commands show how to start them using the previously created DPDK vhost-user ports.

 qemu-system-x86_64 -m 1024 -smp 4 -cpu host -hda ~/f21vm1.qcow2 -boot c -enable-kvm -no-reboot -nographic -net none \
   -chardev socket,id=char1,path=/usr/local/var/run/openvswitch/vhost-user1 \
   -netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce \
   -device virtio-net-pci,mac=00:00:00:00:00:01,netdev=mynet1 \
   -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on \
   -numa node,memdev=mem -mem-prealloc

 qemu-system-x86_64 -m 1024 -smp 4 -cpu host -hda ~/f21vm2.qcow2 -boot c -enable-kvm -no-reboot -nographic -net none \
   -chardev socket,id=char1,path=/usr/local/var/run/openvswitch/vhost-user2 \
   -netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce \
   -device virtio-net-pci,mac=00:00:00:00:00:02,netdev=mynet1 \
   -object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on \
   -numa node,memdev=mem -mem-prealloc

A simple DPDK vhost-user performance test between virtual machines using iperf3


Log in to the virtual machines and configure static IP addresses on the network adapters, in the same subnet. Install iperf3 and run a simple network test.
On one virtual machine, run iperf3 in server mode (iperf3 -s) and start the iperf3 client on the other. An example of the result is shown in the following screen shot.
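
A minimal sketch of the steps inside the guests (the interface name eth0 and the 192.168.1.0/24 addresses are illustrative assumptions):

 # on VM1 (server)
 ip addr add 192.168.1.1/24 dev eth0
 ip link set eth0 up
 iperf3 -s

 # on VM2 (client)
 ip addr add 192.168.1.2/24 dev eth0
 ip link set eth0 up
 iperf3 -c 192.168.1.1 -t 30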



Repeating the performance test with a standard OVS build (without DPDK)


In the previous sections, we built and used the OVS-DPDK build directly from the $OVS_DIR folder, without installing it into the system. To repeat the test with a standard OVS build (without DPDK), you can simply install it from the distribution packages. For example, on yum- (or dnf-) based systems, you can use the following commands:

 pkill -9 ovs
 yum install openvswitch
 rm -f /etc/openvswitch/conf.db
 mkdir -p /var/run/openvswitch
 ovsdb-tool create /etc/openvswitch/conf.db /usr/share/openvswitch/vswitch.ovsschema
 ovsdb-server --remote=punix:/var/run/openvswitch/db.sock --remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach
 ovs-vsctl --no-wait init
 ovs-vswitchd unix:/var/run/openvswitch/db.sock --pidfile --detach
 ovs-vsctl add-br br0
 ovs-vsctl show

At this stage, we have a freshly configured OVS database and a running ovs-vswitchd process without DPDK.
Set up the two virtual machines with tap devices attached to the non-DPDK OVS bridge (br0), as described in the OVS documentation (the /etc/ovs-ifup and /etc/ovs-ifdown helper scripts used by the QEMU commands are sketched after them). Then start the virtual machines using the same images that we used before, for example:

 qemu-system-x86_64 -m 512 -smp 4 -cpu host -hda ~/f21vm1c1.qcow2 -boot c -enable-kvm -no-reboot -nographic \
   -net nic,macaddr=00:11:22:EE:EE:EE -net tap,script=/etc/ovs-ifup,downscript=/etc/ovs-ifdown

 qemu-system-x86_64 -m 512 -smp 4 -cpu host -hda ~/f21vm1c2.qcow2 -boot c -enable-kvm -no-reboot -nographic \
   -net nic,macaddr=00:11:23:EE:EE:EE -net tap,script=/etc/ovs-ifup,downscript=/etc/ovs-ifdown
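
The /etc/ovs-ifup and /etc/ovs-ifdown helper scripts referenced above are not part of OVS itself; a minimal sketch of such scripts (assuming the bridge is named br0) might look like this:

 #!/bin/sh
 # /etc/ovs-ifup: bring the tap device up and attach it to the OVS bridge
 switch='br0'
 ip link set $1 up
 ovs-vsctl add-port ${switch} $1

 #!/bin/sh
 # /etc/ovs-ifdown: detach the tap device from the OVS bridge and bring it down
 switch='br0'
 ip link set $1 down
 ovs-vsctl del-port ${switch} $1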

Repeat the simple iperf3 performance test that we performed earlier. Below is an example of the results; actual results on your system may vary depending on its configuration.



As the results above show, using OVS with DPDK gives a significant performance increase. Both performance tests were performed on the same system; the only difference was that one used the standard OVS build and the other used OVS with DPDK.

Conclusion


Open vSwitch version 2.4 supports DPDK, which brings a very significant performance boost. In this article, we showed how to build and use OVS with DPDK. We looked at setting up a simple OVS bridge with DPDK vhost-user ports for data transfer between virtual machines, and demonstrated the performance improvement with an iperf3 test comparing OVS with and without DPDK.

Source: https://habr.com/ru/post/280502/

