
Our recipe for a fault-tolerant VPN server based on tinc, OpenVPN and Linux



One of our clients asked us to develop a fault-tolerant solution for organizing secure access to its corporate services, setting a number of conditions that the solution had to satisfy.


There were no ready-made solutions satisfying all of these conditions, so we assembled one from popular open-source products, and we are happy to share the result in this article.

Concept development


On the client side, we chose OpenVPN as the base VPN technology: it works perfectly through NAT and supports all the required platforms.

We decided to deploy OpenVPN in TLS server mode and to add and block users with the easy-rsa package, which lets us create a key and certificate for each user and revoke them when necessary.

The hardest thing was to solve the issue of scaling, redundancy and fault tolerance.

The final solution turned out to be simple and elegant. We use N entry nodes whose addresses are handed out to clients via round-robin DNS. All entry nodes and the nodes hosting the client's services are joined into a single tinc VPN L2 space, and the client connections (also L2) are bridged with the tinc interface. As a result, a client connecting via OpenVPN lands on a random entry node and ends up in a single L2 network together with all other clients, nodes and services.



To implement this scheme, three VPS instances were allocated in different data centers to serve as "entry points" to the network (ep1, ep2 and ep3). In addition, a hypervisor with the client's services (hpv1) was present on the network. Ubuntu Server 16.04 was installed on all machines.

Building the tinc VPN


First, install the packages:

 $ sudo apt-get update && sudo apt-get install tinc 

At this stage we need to choose a name for the network; let it be l2vpnnet. Create the directory structure:

 $ sudo mkdir -p /etc/tinc/l2vpnnet/hosts 

Create the tinc.conf file in the /etc/tinc/l2vpnnet directory and fill it with the following contents:

 # This node's name within the VPN
 Name = ep1
 # Switch mode: we need an L2 network
 Mode = switch
 # The network interface tinc will use
 Interface = tap0
 # Port for incoming connections, UDP
 Port = 655
 # Nodes to connect to at startup
 ConnectTo = ep2
 ConnectTo = ep3
 ConnectTo = hpv1

Create the /etc/tinc/l2vpnnet/hosts/ep1 file and enter the node's connection parameters into it:

 # The node's external address and port
 Address = 100.101.102.103 655
 # Encryption and digest algorithms for the connection
 Cipher = aes-128-cbc
 Digest = sha1
 # Do not compress the traffic
 Compression = 0

Now generate the keys. We traditionally use 2048-bit keys: this length provides a good balance between security and latency (due to encryption overhead).

 $ cd /etc/tinc/l2vpnnet && sudo tincd -n l2vpnnet -K2048
 Generating 2048 bits keys:
 ............................................+++ p
 .................................+++ q
 Done.
 Please enter a file to save private RSA key to [/etc/tinc/l2vpnnet/rsa_key.priv]:
 Please enter a file to save public RSA key to [/etc/tinc/l2vpnnet/hosts/ep1]:

Do the same on the rest of the machines. The files with the public key and connection parameters ( /etc/tinc/l2vpnnet/hosts/ep1|ep2|ep3|hpv1 ) must be present on every member of the network in the /etc/tinc/l2vpnnet/hosts directory.
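One convenient way to exchange these files is a small scp loop run from ep1; this is a minimal sketch, assuming root SSH access between the machines and that ep2/ep3/hpv1 resolve to the right hosts:

 # Push our host file to the other nodes and fetch theirs in return
 for node in ep2 ep3 hpv1; do
     scp /etc/tinc/l2vpnnet/hosts/ep1 "root@${node}:/etc/tinc/l2vpnnet/hosts/"
     scp "root@${node}:/etc/tinc/l2vpnnet/hosts/${node}" /etc/tinc/l2vpnnet/hosts/
 done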

The name of the network must be entered into the /etc/tinc/nets.boot file so that tinc starts our VPN network automatically at boot:

 $ sudo cat /etc/tinc/nets.boot
 # This file contains all names of the networks to be started
 # on system startup.
 l2vpnnet

In our company it is customary to use the standard Ubuntu network management mechanisms when setting up both tinc VPN and OpenVPN. Add a description of the tap0 device to /etc/network/interfaces :

 # Bring the interface up automatically at boot
 auto tap0
 # Manual mode: the IP address will be assigned to the bridge
 iface tap0 inet manual
 # Create the tap device before starting tinc...
 pre-up ip tuntap add dev $IFACE mode tap
 # ...and remove it after tinc is stopped
 post-down ip tuntap del dev $IFACE mode tap
 # Tell tinc which network this interface belongs to
 tinc-net l2vpnnet

This setting will allow us to manage tinc using ifup / ifdown scripts.
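For example, restarting the tinc daemon for this network now comes down to the usual commands (keep in mind that this briefly removes tap0 from the bridge):

 $ sudo ifdown tap0 && sudo ifup tap0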

For our single L2 space we must also choose an L3 addressing plan; as an example, we will use the 10.10.10.0/24 network. We configure the bridge interface and assign it an IP address by adding the following to /etc/network/interfaces :

 auto br0
 iface br0 inet static
 # The node's IP address inside the VPN
 address 10.10.10.1
 netmask 255.255.255.0
 # Add the tinc interface to the bridge
 bridge_ports tap0
 # Disable spanning tree on the bridge
 bridge_stp off
 # Do not wait long for the bridge ports to come up
 bridge_maxwait 5
 # No forwarding delay
 bridge_fd 0

After that, we start both devices on all servers and check connectivity with any diagnostic tool (ping, mtr, etc.):

 $ sudo ifup tap0 && sudo ifup br0
 $ ping -c3 10.10.10.2
 PING 10.10.10.2 (10.10.10.2) 56(84) bytes of data.
 64 bytes from 10.10.10.2: icmp_seq=1 ttl=64 time=3.99 ms
 64 bytes from 10.10.10.2: icmp_seq=2 ttl=64 time=1.19 ms
 64 bytes from 10.10.10.2: icmp_seq=3 ttl=64 time=1.07 ms

 --- 10.10.10.2 ping statistics ---
 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
 rtt min/avg/max/mdev = 1.075/2.087/3.994/1.349 ms

Excellent: the L2 space for the entry nodes and the target server is built. Now we need to add remote clients to it.

Configuring OpenVPN


First, install the necessary packages on all servers:

 $ sudo apt-get update && sudo apt-get install openvpn easy-rsa 

Configure the DNS zone by adding three A records with the same name for the VPN service:

 vpn.compa.ny.    IN    A    100.101.102.103
 vpn.compa.ny.    IN    A    50.51.52.53
 vpn.compa.ny.    IN    A    1.1.1.1

DNS will be the first load balancing mechanism in our system. According to the documentation, OpenVPN resolves the name of the connection point and sequentially tries to connect to every IP the name resolves to, while DNS returns the list of IPs in rotating order.
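You can see the rotation by querying the name a couple of times once the zone above is published; the exact order depends on the authoritative server and resolver, so treat this as sample output:

 $ dig +short vpn.compa.ny
 100.101.102.103
 50.51.52.53
 1.1.1.1
 $ dig +short vpn.compa.ny
 50.51.52.53
 1.1.1.1
 100.101.102.103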

The second load balancing mechanism will be a limit on the maximum number of connections per server. Suppose we have about 50 users. Taking redundancy into account, we will set a limit of 30 users per server and distribute the pools of IP addresses as follows:

 Node 1    10.10.10.100-10.10.10.129
 Node 2    10.10.10.130-10.10.10.159
 Node 3    10.10.10.160-10.10.10.189

Create an environment for the CA:

 $ cd /etc/openvpn
 $ sudo -s
 # make-cadir ca
 # cd ca
 # mkdir keys
 # chmod 700 keys

Now edit the vars file, setting the following values:

 # Path to the easy-rsa directory
 export EASY_RSA="`pwd`"
 # Paths to openssl, pkcs11-tool, grep
 export OPENSSL="openssl"
 export PKCS11TOOL="pkcs11-tool"
 export GREP="grep"
 # openssl configuration
 export KEY_CONFIG=`$EASY_RSA/whichopensslcnf $EASY_RSA`
 # Directory for the generated keys
 export KEY_DIR="$EASY_RSA/keys"
 export PKCS11_MODULE_PATH="dummy"
 export PKCS11_PIN="dummy"
 # Key size
 export KEY_SIZE=2048
 # The CA certificate is valid for 10 years
 export CA_EXPIRE=3650
 # Certificate fields: country, province,
 # city, organization, e-mail and so on
 export KEY_COUNTRY="RU"
 export KEY_PROVINCE="Magadan region"
 export KEY_CITY="Susuman"
 export KEY_ORG="Company"
 export KEY_EMAIL="info@compa.ny"
 export KEY_OU="IT"
 export KEY_NAME="UnbreakableVPN"

Save and start generating keys:

 # . vars
 # ./clean-all
 # ./build-ca
 Generating a 2048 bit RSA private key
 ..........................+++
 .+++
 writing new private key to 'ca.key'
 -----
 You are about to be asked to enter information that will be incorporated
 into your certificate request.
 What you are about to enter is what is called a Distinguished Name or a DN.
 There are quite a few fields but you can leave some blank
 For some fields there will be a default value,
 If you enter '.', the field will be left blank.
 -----
 Country Name (2 letter code) [RU]:
 State or Province Name (full name) [Magadan region]:
 Locality Name (eg, city) [Susuman]:
 Organization Name (eg, company) [Company]:
 Organizational Unit Name (eg, section) [IT]:
 Common Name (eg, your name or your server's hostname) [Company CA]:
 Name [UnbreakableVPN]:
 Email Address [info@compa.ny]:
 # ./build-dh
 Generating DH parameters, 2048 bit long safe prime, generator 2
 This is going to take a long time
 …
 # ./build-key-server server
 # openvpn --genkey --secret keys/ta.key

Create a test user and immediately revoke its certificate in order to create a revocation list (crl.pem):

 # ./build-key testuser
 # ./revoke-full testuser

Copy all the keys needed to configure the server to a directory with OpenVPN key information:

 # cd keys
 # mkdir /etc/openvpn/.keys
 # cp ca.crt server.crt server.key dh2048.pem ta.key crl.pem /etc/openvpn/.keys
 # exit

Prepare the OpenVPN server configuration, for which we will create the /etc/openvpn/server.conf file:

 # Logging verbosity
 verb 4
 # Listening port and protocol
 port 1194
 proto tcp-server
 # Operating mode
 mode server
 tls-server
 # MTU setting
 tun-mtu 1500
 # A dedicated tap device for client connections,
 # so it can be added to the bridge
 dev ovpn-clients
 dev-type tap
 # Direction of the TLS auth key on the server side
 key-direction 0
 # Key material
 ca /etc/openvpn/.keys/ca.crt
 cert /etc/openvpn/.keys/server.crt
 key /etc/openvpn/.keys/server.key
 dh /etc/openvpn/.keys/dh2048.pem
 tls-auth /etc/openvpn/.keys/ta.key
 crl-verify /etc/openvpn/.keys/crl.pem
 # Authentication and encryption parameters
 auth sha1
 cipher AES-128-CBC
 # Note: persist-key is deliberately absent so that key material
 # (including crl.pem) is re-read on reload
 persist-tun
 # Bridge mode: gateway, netmask and the pool of client addresses
 topology subnet
 server-bridge 10.10.10.1 255.255.255.0 10.10.10.100 10.10.10.129
 # Route all client traffic through the VPN
 # and push DNS servers
 push "redirect-gateway autolocal"
 push "dhcp-option DNS 10.10.10.200"
 push "dhcp-option DNS 10.20.20.200"
 # Ping the other side every 10 seconds,
 # consider the connection dead after 2 minutes
 keepalive 10 120
 # No more than 30 clients per server
 max-clients 30
 # Drop privileges after startup
 user nobody
 group nogroup
 # Allow clients to change their source IP (roaming)
 float
 # Log file
 log /var/log/openvpn-server.log

For the second and third servers we will use the same set of key information - the configuration files will differ only in the pool of issued IP addresses.
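For illustration, a sketch of the only line that changes on the second and third nodes, following the pool table above (keeping 10.10.10.1 as the pushed gateway is an assumption on our part; it stays reachable over the shared L2 segment, and using each node's own bridge address would work just as well):

 # ep2: /etc/openvpn/server.conf
 server-bridge 10.10.10.1 255.255.255.0 10.10.10.130 10.10.10.159
 # ep3: /etc/openvpn/server.conf
 server-bridge 10.10.10.1 255.255.255.0 10.10.10.160 10.10.10.189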

By analogy with tinc, we will configure OpenVPN control via standard ifup / ifdown scripts, adding a device description to /etc/network/interfaces :

 auto ovpn-clients
 iface ovpn-clients inet manual
 pre-up ip tuntap add dev $IFACE mode tap
 post-up systemctl start openvpn@server.service
 pre-down systemctl stop openvpn@server.service
 post-down ip tuntap del dev $IFACE mode tap

We add this interface to the bridge with the tinc interface by changing the br0 settings:

 ...
 netmask 255.255.255.0
 bridge_ports tap0 ovpn-clients
 bridge_stp off
 ...

Bring everything into working order:

 $ sudo ifup ovpn-clients && sudo ifdown br0 && sudo ifup br0 
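You can check that both interfaces actually joined the bridge with brctl from the bridge-utils package (sample output; the bridge id will differ):

 $ brctl show br0
 bridge name     bridge id               STP enabled     interfaces
 br0             8000.0a1b2c3d4e5f       no              ovpn-clients
                                                         tap0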

The server configuration is ready. Now we will create the client keys and an ovpn file:

 $ sudo -s
 # cd /etc/openvpn/ca
 # . vars
 # ./build-key PetrovIvan
 # exit

To make things easier for the user, we will create a client ovpn file with the key material inline:

 $ vim PetrovIvan.ovpn
 # Client mode, tap device, TCP protocol
 client
 dev tap
 proto tcp
 # The MTU must match the server setting
 tun-mtu 1500
 # The address and port of the VPN service
 remote vpn.compa.ny 1194
 # Do not bind to a fixed local port
 nobind
 # Keep the keys and the tap device across restarts
 persist-key
 persist-tun
 # Fix the MSS
 mssfix
 # TLS auth key direction and server certificate checks
 key-direction 1
 ns-cert-type server
 remote-cert-tls server
 auth sha1
 cipher AES-128-CBC
 verb 4
 keepalive 10 40
 <ca>
 ### contents of ca.crt
 </ca>
 <tls-auth>
 ### contents of ta.key
 </tls-auth>
 <cert>
 ### contents of PetrovIvan.crt
 </cert>
 <key>
 ### contents of PetrovIvan.key
 </key>

Save the file and hand it to the client, who simply connects to the VPN using this ovpn file. This completes the OpenVPN configuration.
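Assembling such a file by hand quickly becomes tedious; below is a minimal sketch of a helper script that inlines the key material. The script name, the client-template.conf file holding the common options shown above, and the paths are our assumptions, not part of the original setup.

 #!/bin/sh
 # Sketch: build an inline .ovpn for the given client.
 # Usage: ./make-ovpn.sh PetrovIvan > PetrovIvan.ovpn
 # Assumes it is run from /etc/openvpn/ca after ./build-key <client>.
 CLIENT="$1"
 KEYS=/etc/openvpn/ca/keys

 cat client-template.conf                 # the common options shown above
 printf '<ca>\n';       cat "$KEYS/ca.crt";        printf '</ca>\n'
 printf '<tls-auth>\n'; cat "$KEYS/ta.key";        printf '</tls-auth>\n'
 printf '<cert>\n';     cat "$KEYS/$CLIENT.crt";   printf '</cert>\n'
 printf '<key>\n';      cat "$KEYS/$CLIENT.key";   printf '</key>\n'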

Blocking clients


When we need to deny one of the clients access to the VPN (for example, when an employee leaves), we simply revoke the certificate:

 $ ./revoke-full PetrovIvan

After the revocation, we update crl.pem on all servers and execute:

 $ sudo service openvpn reload 

Note that server.conf does not contain the persist-key option. This allows the key material to be re-read during a reload; otherwise a restart of the daemon would be required.

To distribute the revocation list and perform the OpenVPN reload we use Chef. Obviously, any other configuration management tool (Ansible, Puppet, ...) or even a simple shell script is suitable for this purpose.
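As an illustration, the "simple shell script" variant of distributing the CRL might look like this (the node names and root SSH access are assumptions):

 #!/bin/sh
 # Sketch: push the fresh crl.pem to every entry node and reload OpenVPN.
 CRL=/etc/openvpn/ca/keys/crl.pem
 for node in ep1 ep2 ep3; do
     scp "$CRL" "root@${node}:/etc/openvpn/.keys/crl.pem"
     ssh "root@${node}" "service openvpn reload"
 done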

In addition, we placed the directory with the CA in Git, which allowed both us and the client to work with the key material without collisions.

Conclusion


Of course, the described solution keeps evolving in operation. In particular, we have added simple scripts that automatically generate client ovpn files during key creation, and we are working on a VPN monitoring system.

If you see weak points in this solution, or have ideas or questions about developing the configuration further, we will be glad to see your comments!


Source: https://habr.com/ru/post/338628/

