
LXC in the service of QoS (replacing ifb with veth)

Probably everyone who has set up traffic prioritization on Linux has run into the fact that ifb's ability to manage incoming traffic is rather limited. Below I will walk through a number of typical problems in building QoS and show how they were solved with the help of containers.

Where do these problems come from?


It so happens that in Linux we can only shape a network flow in the outbound direction. Whatever has already arrived on an interface has already passed through every bottleneck, so it would seem pointless to drop or delay that traffic. However, most protocols (TCP, as well as those built on top of UDP or implementing their own transport) have sender-side flow control that takes the client's actual throughput into account and adjusts the sending rate. Because of this, it does make sense to manage the flow going toward the client on our side: the sender will notice the drops and delays and slow down. There are many mechanisms for doing this; I will cover some of them.

Typical situations and problems


One interface to the Internet, one to the local network, no tunnels


The simplest and most problem-free situation. Hereafter, for simplicity, we take it as a given that the gateway itself does not generate traffic; it merely forwards it.
So, everything is simple: we hang queues on both interfaces. The rate toward the clients ("incoming") is shaped on the interface facing the local network (on its outgoing flow), and the outgoing rate on the external interface. Everything works; nothing interesting, moving on.
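
For illustration, a minimal sketch of such a setup with tc and HTB (the interface names eth0/eth1 and the rates are my assumptions, not from the original setup):

    # LAN-facing interface: its egress is the clients' "incoming" traffic
    tc qdisc add dev eth1 root handle 1: htb default 20
    tc class add dev eth1 parent 1: classid 1:1 htb rate 90mbit
    tc class add dev eth1 parent 1:1 classid 1:10 htb rate 30mbit ceil 90mbit prio 1
    tc class add dev eth1 parent 1:1 classid 1:20 htb rate 60mbit ceil 90mbit prio 2
    # classify by a firewall mark set in iptables (-j MARK --set-mark 1)
    tc filter add dev eth1 parent 1: protocol ip handle 1 fw flowid 1:10

    # external interface: its egress is our outgoing traffic
    tc qdisc add dev eth0 root handle 1: htb default 10
    tc class add dev eth0 parent 1: classid 1:10 htb rate 10mbit

Note that on a regular interface we can classify with iptables marks; keep this in mind, because it is exactly what we are about to lose.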

Add another internal interface

Now things are not so good. How do we divide the "incoming" flow between the internal networks? The queues on neighboring interfaces know nothing about the actual load on the channel. We could split the total rate in half and give half to each network, but then, if the first network is idle and the second wants the maximum, the second will still get only half... Sad, but ifb (or its nearly deceased competitor imq) will help us.

The idea of ifb is that we place a pseudo-interface in front of our real one and thereby gain control over the "incoming" flow of the real interface. Unfortunately, there is a rather large fly in this ointment: ifb is so special that the marking and filtering machinery of iptables cannot be applied to it, only tc can. And what is wrong with tc? Its filtering and marking capabilities are quite poor, and the syntax is completely different.

However limited tc may be, it can solve the problem at hand quite well: we hang an ifb with queues on the external (incoming) interface and remove the rules from the internal interfaces, where they are no longer needed.
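
A sketch of what that looks like (the interface names and subnets are my assumptions; note that all classification has to be done with tc filters, not iptables):

    modprobe ifb numifbs=1
    ip link set ifb0 up

    # redirect everything arriving on the external interface to ifb0
    tc qdisc add dev eth0 handle ffff: ingress
    tc filter add dev eth0 parent ffff: protocol ip u32 match u32 0 0 \
        action mirred egress redirect dev ifb0

    # shape on ifb0's egress: both networks now borrow from one common parent
    tc qdisc add dev ifb0 root handle 1: htb default 20
    tc class add dev ifb0 parent 1: classid 1:1 htb rate 100mbit
    tc class add dev ifb0 parent 1:1 classid 1:10 htb rate 50mbit ceil 100mbit
    tc class add dev ifb0 parent 1:1 classid 1:20 htb rate 50mbit ceil 100mbit
    tc filter add dev ifb0 parent 1: protocol ip u32 match ip dst 192.168.1.0/24 flowid 1:10
    tc filter add dev ifb0 parent 1: protocol ip u32 match ip dst 192.168.2.0/24 flowid 1:20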

Now let's connect two offices with a tunnel

Now things get really bad. We have a new interface that works like a normal one but, among other things, "invisibly" consumes the bandwidth of another interface.

We can no longer distribute traffic efficiently, and ifb will not help us here either. What remains is the head-on solution: allocate a fixed band to the tunnel and manage the tunnel's contents as a normal interface (by hanging an ifb and regular queues on it).

Everything is "not bad", except that either the band given to the tunnel or the band given to the regular traffic can go to waste for nothing (depending on how you look at it).
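
A sketch of that head-on reservation (the rates are my assumptions); ceil equal to rate is precisely what makes the unused band unrecoverable:

    # hard split on ifb0: the tunnel's share cannot be borrowed back, and vice versa
    tc qdisc add dev ifb0 root handle 1: htb default 20
    tc class add dev ifb0 parent 1: classid 1:10 htb rate 30mbit ceil 30mbit   # tunnel
    tc class add dev ifb0 parent 1: classid 1:20 htb rate 70mbit ceil 70mbit   # everything else
    # steer GRE (IP protocol 47) into the tunnel class
    tc filter add dev ifb0 parent 1: protocol ip u32 match ip protocol 47 0xff flowid 1:10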

LXC will help us!


In essence, the main idea is the use of netns (network namespaces), but it is simpler to use a container, even though it consumes more resources (mainly disk space).

So: we need an intermediate chain of interfaces between the "external" and "internal" interfaces. It is on this chain that the QoS of all traffic can easily be built, and the most we lose on it is about 4.5% (counting the overhead; here I assumed IPsec/GRE) for tunneling and encryption (to guarantee that everyone receives the specified band, we assume that all traffic entering or leaving us goes through the tunnel).
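
A rough sanity check of that figure (my assumptions: a 1500-byte MTU and GRE over IPsec ESP in transport mode): about 20 bytes for the outer IP header, 4 bytes for GRE, and roughly 40-45 bytes for ESP with its IV, padding, and authentication trailer come to roughly 65-70 extra bytes per packet, and 68 / 1500 ≈ 4.5%.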

I assume that everyone can create an LXC container, so I will only cover the particulars that we need here.

So, in the configuration of the container, we need:

Add our real external interface to the container (we hand it over to the container, so it will "disappear" from the main namespace):

    lxc.network.type = phys
    lxc.network.flags = up
    lxc.network.link = enp3s6

Or similarly for vlan:

    lxc.network.type = vlan
    lxc.network.flags = up
    lxc.network.link = enp2s0
    lxc.network.vlan.id = 603

Add an interface through which we will communicate with the main system (the script brings the interface up on the main system; its profile must be created in advance):

    lxc.network.type = veth
    lxc.network.flags = up
    lxc.network.veth.pair = route0
    lxc.network.name = eth1
    lxc.network.hwaddr = 02:b2:30:41:30:25
    lxc.network.script.up = /usr/bin/nmcli connection up route0

Add automatic bringing up and down of the main system's interfaces when the container starts and stops:

    lxc.hook.pre-start = /var/lib/lxc/route0/pre-start.sh
    lxc.hook.post-stop = /var/lib/lxc/route0/post-stop.sh

/var/lib/lxc/route0/pre-start.sh
    #!/bin/sh
    # take vlan603 down on the main system before the container grabs the interface
    /usr/bin/nmcli connection down vlan603 >/dev/null 2>&1
    exit 0


/var/lib/lxc/route0/post-stop.sh
    #!/bin/sh
    # give vlan603 back to the main system after the container stops
    /usr/bin/nmcli connection up vlan603 >/dev/null 2>&1
    exit 0


Add the ability to use tun/tap interfaces:

    lxc.hook.autodev = /var/lib/lxc/route0/autodev.sh

/var/lib/lxc/route0/autodev.sh
    #!/bin/bash
    # create /dev/net/tun inside the container's /dev
    cd ${LXC_ROOTFS_MOUNT}/dev
    mkdir net
    mknod net/tun c 10 200
    chmod 0666 net/tun


Allow the container to manage some routing parameters (if anyone knows how to grant narrower permissions, hints are welcome):

    lxc.mount.auto = proc:rw sys:ro

If necessary, enable autostart for the container.
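
For example (key names as in the legacy LXC configuration format used above):

    lxc.start.auto = 1
    lxc.start.delay = 5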

On the main system we need a profile for the route0 interface, which appears when the container starts; I assume you are using NetworkManager.
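
A sketch of creating such a profile (the address and prefix are my assumptions; the con-name must match the `nmcli connection up route0` call in the configuration above):

    nmcli connection add type ethernet ifname route0 con-name route0 \
        autoconnect no ipv4.method manual ipv4.addresses 10.0.0.1/30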

What else needs to be configured: the container now effectively becomes the border router, so IP forwarding, routes, firewall/NAT rules (and the tunnels themselves) move into it.


Now we simply configure the queues on the outgoing flows of the interfaces connecting the main system with the container. There are no longer any problems determining the destination of traffic, and we can use the entire band available from our provider (minus the percentage of tunnel overhead).
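
A sketch (names as in the configuration above, rates are my assumptions). Both ends of the veth pair are ordinary interfaces, so iptables marking works again:

    # on the main system: egress of route0 is our outgoing (upload) traffic
    tc qdisc add dev route0 root handle 1: htb default 10
    tc class add dev route0 parent 1: classid 1:10 htb rate 10mbit

    # inside the container: egress of eth1 is the traffic heading to the clients (download)
    tc qdisc add dev eth1 root handle 1: htb default 10
    tc class add dev eth1 parent 1: classid 1:10 htb rate 95mbit   # ~100mbit minus tunnel overhead
    # classify with an fwmark set by iptables (-t mangle ... -j MARK --set-mark 1)
    tc filter add dev eth1 parent 1: protocol ip handle 1 fw flowid 1:10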

P.S. The era of Linux boxes handing out the Internet is rapidly passing: hardware routers are becoming ever more attractive in price, while their power and flexibility are not far behind their "big" brothers. The same Mikrotik will solve this "problem" without much trouble.

Source: https://habr.com/ru/post/309124/

