
OpenVZ + venet + vlan / addresses from different networks

This post is about assigning OpenVZ containers addresses from different networks on the venet interface. I decided to write it after seeing other specialists solve this problem in ugly ways, or refuse to use venet at all.

Environment


We have an OpenVZ host with containers that need to be assigned real (public) addresses and addresses on the internal network. A container may have a real address, an internal address, or both at once. The real addresses and the internal network are reachable from the host through different network segments; in my example they sit in different vlans, and the internal network is the block 192.168.1.0/24.
The host OS is CentOS 6.

Problem


If you simply add an internal and an external address to a container, the network in the container will not work as it should: for every destination the first address will always be chosen as the source, and when picking a route on the host, a routing table unsuitable for the internal (or external) network will be used.
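To illustrate (a sketch, assuming a hypothetical container 101 that was given 200.200.100.12 and 192.168.1.100 in the usual /32 form), the symptom looks something like this: even for an internal destination, the first address is chosen as the source.
 [root@pve1 ~]# vzctl exec 101 ip route get 192.168.1.15
 192.168.1.15 dev venet0  src 200.200.100.12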
The OpenVZ documentation and mailing lists tend to recommend veth for this kind of network configuration. However, I try not to use veth.

So the game is worth the candle: even in this configuration I want the containers' networking to stay on venet interfaces.

Solution


As mentioned above, the trouble with venet interfaces is that all container traffic leaves through one interface, so when sending, the host has to tell the source addresses apart and pick different routes for them. Administrators solve this in various ways: by slipping extra rules inside the container, or by changing how addresses are added in the vz ifup scripts. I believe the network configuration should live outside the container, so that nothing breaks during migration.

Selecting the right route on the host

Everything below happens in /etc/sysconfig/network-scripts unless stated otherwise.
Internal interface:
 [root@pve1 network-scripts]# cat ifcfg-vmbr0
 DEVICE="vmbr0"
 BOOTPROTO="static"
 IPV6INIT="no"
 DOMAIN="int"
 DNS1="192.168.1.15"
 DNS2="192.168.1.17"
 GATEWAY="192.168.1.3"
 IPADDR="192.168.1.142"
 NETMASK="255.255.255.0"
 ONBOOT="yes"
 TYPE="Bridge"

External interface:
 [root@pve1 network-scripts]# cat ifcfg-vmbr1
 DEVICE="vmbr1"
 BOOTPROTO="static"
 IPADDR=200.200.100.6
 NETMASK=255.255.254.0
 IPV6INIT="no"
 TYPE="Bridge"

We define two additional routing tables, one for sending container packets into the internal network and one for the external. The last two lines of the file below give names to two arbitrarily chosen free table numbers:
 [root@pve1 network-scripts]# cat /etc/iproute2/rt_tables
 #
 # reserved values
 #
 255     local
 254     main
 253     default
 0       unspec
 #
 # local
 #
 #1      inr.ruhep
 200     external
 210     internal
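If you would rather not edit the file by hand, the same two entries can be appended from the shell (200 and 210 are arbitrary; any free numbers would do):
 echo "200 external" >> /etc/iproute2/rt_tables
 echo "210 internal" >> /etc/iproute2/rt_tables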

Define the contents of these tables:
 [root@pve1 network-scripts]# cat route-vmbr0
 192.168.1.0/24 dev vmbr0 table internal
 default via 192.168.1.3 table internal

 [root@pve1 network-scripts]# cat route-vmbr1
 200.200.100.0/23 dev vmbr1 table external
 default via 200.200.100.30 table external
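The route-<interface> files use plain ip route syntax and are applied when the interface comes up; doing the same by hand, for a quick test, would look like this:
 ip route add 192.168.1.0/24 dev vmbr0 table internal
 ip route add default via 192.168.1.3 table internal
 ip route add 200.200.100.0/23 dev vmbr1 table external
 ip route add default via 200.200.100.30 table external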

From both the internal and the external addresses, the directly connected subnets and the Internet are reachable, with the internal network reaching the Internet through a NAT gateway (192.168.1.3). Exactly what we need.
Now we have to decide which routing rules apply when. The files alone are hard to make sense of, so I will add comments below.
 [root@pve1 network-scripts]# cat rule-vmbr0
 from 192.168.1.0/24 iif venet0 lookup internal
 from 192.168.1.0/24 to 192.168.1.0/24 iif venet0 lookup main

 [root@pve1 network-scripts]# cat rule-vmbr1
 from 200.200.100.0/23 iif venet0 lookup external
 from 200.200.100.0/23 to 200.200.100.0/23 iif venet0 lookup main

These PBR rules are added bottom-up, so the file's second line ends up above the first in the list and is evaluated first; the rules are tied to their interface only in the sense that they are installed when that interface comes up. The resulting rule table:
 [root@pve1 network-scripts]# ip ru li
 0:      from all lookup local
 32762:  from 200.200.100.0/23 to 200.200.100.0/23 iif venet0 lookup main
 32763:  from 200.200.100.0/23 iif venet0 lookup external
 32764:  from 192.168.1.0/24 to 192.168.1.0/24 iif venet0 lookup main
 32765:  from 192.168.1.0/24 iif venet0 lookup internal
 32766:  from all lookup main
 32767:  from all lookup default

Here you can see that everything arriving through venet from the external network is routed by the rules of the external network (the external table). But one or more addresses from the external block 200.200.100.0/23 may be hosted by a neighboring container on the same machine, and then it must be reached not through the physical interface but through the virtual one. That is why, for traffic from 200.200.100.0/23 to 200.200.100.0/23, I fall back to the main routing table, where OpenVZ adds the corresponding /32 routes, and which has a route for 200.200.100.0/23 through the physical interface for everything else.
The same applies to the internal network.
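A handy way to check which way the host will route a container's packet is ip route get, which lets you specify the source address and incoming interface. With the addresses of the example container from the next section, I would expect something like:
 [root@pve1 ~]# ip route get 8.8.8.8 from 200.200.100.12 iif venet0
 8.8.8.8 from 200.200.100.12 via 200.200.100.30 dev vmbr1
 [root@pve1 ~]# ip route get 8.8.8.8 from 192.168.1.100 iif venet0
 8.8.8.8 from 192.168.1.100 via 192.168.1.3 dev vmbr0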

Now our host machine understands that packets from the gray (private) network must not be pushed straight out to the Internet, and so on in the same vein.

Prompting the container to pick the right source address

It's simple: on venet you can give the container not only /32 addresses, but also addresses with an explicit subnet mask. This hints to the kernel that addresses from that block are directly adjacent and that this src is preferred when sending to them:
 [root@pve1 network-scripts]# fgrep IP /etc/vz/conf/138.conf
 IP_ADDRESS="200.200.100.12/23 192.168.1.100/24"

 [root@pve1 network-scripts]# vzctl exec 138 ip r
 192.168.1.0/24 dev venet0  proto kernel  scope link  src 192.168.1.100
 200.200.100.0/23 dev venet0  proto kernel  scope link  src 200.200.100.12
 default dev venet0  scope link
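The same assignment can be made with vzctl instead of editing the config by hand; assuming your vzctl accepts a netmask in --ipadd (the config above implies it does), the order of the calls matters, because the first address becomes the default source:
 vzctl set 138 --ipadd 200.200.100.12/23 --save
 vzctl set 138 --ipadd 192.168.1.100/24 --save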

By default the first address is selected. So if I swap the addresses in the container config, the container will prefer its internal IP, and the host will send its traffic to the Internet through the NAT gateway.
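As a final sanity check from inside the container, ip route get should now pick a matching source per destination; with the config above I would expect:
 [root@pve1 ~]# vzctl exec 138 ip route get 192.168.1.15
 192.168.1.15 dev venet0  src 192.168.1.100
 [root@pve1 ~]# vzctl exec 138 ip route get 8.8.8.8
 8.8.8.8 dev venet0  src 200.200.100.12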

Conclusion


In my opinion, venet is one of OpenVZ's strengths, and when possible it is best to use it. The solution above lets a container use addresses from different networks while staying abstracted from the host's network configuration.
I also hope that, beyond its main purpose, this post serves someone as an illustration of policy-based routing in Linux.

Source: https://habr.com/ru/post/231497/

