
After reviewing most of the thematic posts on Habr, I was genuinely surprised to find that the topic of using Unix/Linux in the service of Internet providers is covered extremely poorly. With this article I will try to partially fill that gap.
It is not hard to guess why such articles are almost absent from the Internet: anyone who admits to using Linux/FreeBSD at an ISP is immediately accused of being cheap and advised to buy Cisco or, in the most extreme case, Juniper. That is why the second goal of this article is to show the reader that some technical solutions built on Linux are in many respects superior, sometimes by orders of magnitude, to branded solutions from the best-known vendors.
Shaping
Our first experience of “non-standard” use of Linux came right after we launched broadband access for residential customers. We needed to shape the external channel of each of our users. Having no in-house experience on the subject at that point, we reinvented our own bicycle using cbq and our own scripting around it. This scheme worked for a couple of months, until we recognized all of its downsides and ran into the performance ceiling of the machine.
The problem was that the system began to eat up far too many soft interrupts even at modest traffic levels: at roughly 300 Mbit/s of transit traffic and 30 kpps, with 1000 linear cbq rules (2 rules, ingress/egress, per user) on each interface, si in top reached 100%.
If we faced the same task today with the same technical means, we would solve it with Linux tc: htb + hash filters.
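To give a feel for that approach, here is a minimal sketch of htb + u32 hash filters; the interface name and the 10.0.0.0/24 subscriber subnet are assumptions for illustration, and in practice the per-user rules would be generated by a script:

```
DEV=eth0

# Root htb qdisc
tc qdisc add dev $DEV root handle 1: htb

# Entry filter; creates the root u32 hash table (800:)
tc filter add dev $DEV parent 1:0 prio 5 protocol ip u32

# A 256-bucket hash table for lookups keyed on the last octet of the dst IP
tc filter add dev $DEV parent 1:0 prio 5 handle 2: protocol ip u32 divisor 256

# Hash subscriber-bound traffic into table 2: by the last byte of the dst address
tc filter add dev $DEV parent 1:0 prio 5 protocol ip u32 ht 800:: \
    match ip dst 10.0.0.0/24 \
    hashkey mask 0x000000ff at 16 link 2:

# Per-user class and filter, e.g. 10.0.0.5 shaped to 10 Mbit/s
# (the bucket number is the last octet of the address, in hex)
tc class add dev $DEV parent 1: classid 1:5 htb rate 10mbit ceil 10mbit
tc filter add dev $DEV parent 1:0 prio 5 protocol ip u32 ht 2:5: \
    match ip dst 10.0.0.5 flowid 1:5
```

With hash filters the kernel does a constant-time bucket lookup instead of walking a thousand linear rules one by one, which is exactly what the cbq scheme above was choking on.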
NAT
Since at that time we were a small local home ISP, when connecting residential subscribers we faced the pressing question of whether to give each client a “white” routable IP address or to limit ourselves to issuing “gray” addresses.
We settled on “gray” addresses: using them meant significant savings of such valuable material as real addresses, which were already scarce at the time. It also somewhat improved the security and comfort of our users, since their computers were not directly reachable from the outside Internet.
For NAT we chose Cisco equipment, specifically the Cisco ASA 5505; at that time its capacity was enough to cover our customers' needs. Then, suddenly, word came that the Cisco ASA we had ordered would be delayed, and the immediate question became how to NAT a 100 Mbit/s stream.
A test bench was assembled “on the knee” from an ordinary office PC with 2 gigabit network adapters, and it turned out that, with a little tuning, this simplest of boxes with the most ordinary Realtek NICs was able to push the stream we needed.
After a hardware upgrade, one of our NAT servers passed 1800 Mbit/s at peak (yes, this is not a typo: almost 2 gigabits of traffic) with a comparatively small load on the system.
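In essence the setup boils down to very little; here is a minimal sketch, where the interface name, the 10.0.0.0/16 client network, and the 203.0.113.1 public address are placeholders, and the actual tuning went somewhat further than these sysctls:

```
# Turn the box into a router
sysctl -w net.ipv4.ip_forward=1

# Grow the connection-tracking table for many concurrent flows
# (the key is net.ipv4.netfilter.ip_conntrack_max on older kernels)
sysctl -w net.netfilter.nf_conntrack_max=1048576

# Translate the gray client network to the public address on the uplink
iptables -t nat -A POSTROUTING -s 10.0.0.0/16 -o eth1 \
    -j SNAT --to-source 203.0.113.1
```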
NETFLOW
Faced with the problem of collecting per-user traffic statistics before NAT, that is, while the users still have gray addresses, we arrived at the simplest scheme for obtaining NETFLOW statistics.
We built a scheme in which a copy of all user traffic is mirrored (SPAN port) to dedicated network ports of a server running Linux, which then uses ipt_NETFLOW to export a flow stream to the required collector.
A more detailed description of the scheme, together with the configs, is given here.
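The gist of that configuration is roughly as follows; the collector address is a placeholder, and note that actually getting mirrored frames up the IP stack can require extra steps (promiscuous mode on the capture port, for one), which the linked write-up covers:

```
# Load ipt_NETFLOW, pointing the exporter at the collector
modprobe ipt_NETFLOW destination=192.0.2.10:2055

# Hand every packet the box sees to the NETFLOW target for accounting
iptables -I FORWARD -j NETFLOW
iptables -I INPUT -j NETFLOW
```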
P.S. We are aware that most Cisco equipment can export an already-formed NETFLOW stream to a specified collector, but in our network diagram at the time there simply was no such equipment :)
Termination of user networks
From the start we wanted to give the user just an IP address, a subnet mask, and a gateway, without burdening them with PPPoE, PPTP, or VPN settings. This was ultimately meant to unload the technical support service (which is exactly what happened in practice), since network setup became quite trivial in any client OS.
Deciding to apply our previous Linux experience, we came up with the following scheme: at key locations on the network we installed Linux servers with a pair of four-port network adapters, with one link “looking” towards the network core and the rest towards the “clusters”. On each interface a heap of VLANs is raised, with several networks in each of them.
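Raising one such VLAN with its networks looks roughly like this; the interface name, VLAN ID, and addressing are assumptions for illustration:

```
# Create VLAN 101 on the downstream interface
# (on current iproute2: ip link add link eth2 name eth2.101 type vlan id 101)
vconfig add eth2 101
ip link set eth2.101 up

# The server is the gateway for each of the networks inside the VLAN
ip addr add 10.10.1.1/24 dev eth2.101
ip addr add 10.10.2.1/24 dev eth2.101
```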
In total we had 4 such servers, with approximately 10k subscribers behind each.
Peak traffic through each server during peak hours approached one and a half million packets per second. The servers exchanged routes with each other via the OSPF protocol.
Blocking subscriber access was handled with ipset.
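A minimal sketch of that blocking mechanism, in current ipset syntax; the set name and address are illustrative, and the CentOS 5-era ipset used ipset -N blocked iphash and --set in place of create and --match-set:

```
# A set of blocked subscriber addresses
ipset create blocked hash:ip

# One static rule drops traffic from anyone in the set
iptables -I FORWARD -m set --match-set blocked src -j DROP

# Blocking and unblocking then never touches the iptables rules
ipset add blocked 10.10.1.55
ipset del blocked 10.10.1.55
```

The point of ipset here is exactly this: the firewall holds a single rule, and turning a subscriber on or off is a constant-time set update rather than an iptables reload.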
Border
I would like to finish on this happy note, but there is one more “non-standard” use of Linux worth describing: as a border router. It so happened that the Cisco ASR acting as our border, which received 2 full views from two uplinks, failed.
A small lyrical digression follows. Cisco honored its obligations 100% and shipped a replacement within a few hours of our filing the necessary paperwork, but, as you understand, customers were not going to wait the day it would take for the new iron to reach our region. The decision was spontaneous.
A server was taken from the warehouse, Linux + quagga was installed on it, and it was safely put in place of the failed Cisco.
At rush hour this engineering marvel “chewed” an incoming flow of 1.4 Gbit/s, with a total of around 400 kpps across all interfaces.
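For a sense of scale, bootstrapping such a border comes down to surprisingly little; here is a minimal sketch, where the AS numbers and addresses are documentation placeholders rather than our real ones:

```
# Write a bare-bones bgpd config: our prefix plus a full view from each uplink
cat > /etc/quagga/bgpd.conf <<'EOF'
hostname border
router bgp 64512
 bgp router-id 203.0.113.1
 network 203.0.113.0/24
 neighbor 198.51.100.1 remote-as 64496
 neighbor 198.51.100.1 description uplink-1
 neighbor 192.0.2.1 remote-as 64497
 neighbor 192.0.2.1 description uplink-2
EOF

# zebra installs the BGP-learned routes into the kernel table
service zebra start
service bgpd start
```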
P.S. In the course of our work we built and tested many RPM packages for the CentOS 5 distribution; here is just a small sample of them:
- ipset
- connlimit
- conntrack-tools
- ipt_netflow
- flow-tools
- quagga
You can download them from this repository.
P.P.S. If you have your own ideas or notes on using *nix-like OSes in the service of an ISP, you are welcome to share them.
This article was written by user CentALT; unfortunately, he is a little short of the karma needed to post it himself, so please direct all pluses/minuses to his karma.