
Nginx in DevOps and administration work: the dark side of the force

In DevOps and administration work, there are regular moments when someone urgently needs to be given access to something: a Docker instance, one of many containers, or some internal service.


Everyone knows about nginx's capabilities for proxying traffic, balancing load between servers, and other useful things that help tie disparate services together. However, the range of problems that come up during development is much wider.
The main point of this article is to show a non-standard approach to seemingly simple things, such as providing temporary access to the inside of a closed network segment.


Let's start with something simple.


You have a closed segment inside which various database instances, web services, and so on are running. Usually nginx sits in front of all of them and routes the traffic.
Consider, for example, Galera Cluster for MySQL or MariaDB.


Galera Cluster is a synchronous multi-master replication system for several MySQL or MariaDB instances.
There are plenty of guides for setting up such a cluster, and each of them includes nginx with the stream module. The classic version of the configuration file looks like this:


stream {
    error_log /var/log/nginx/mariadb-balancer-error.log info;

    upstream db {
        server 172.16.100.21:3306;
        server 172.16.100.22:3306;
        server 172.16.100.23:3306;
        server 172.16.100.24:3306 backup;
        server 172.16.100.25:3306 down;
    }

    server {
        listen 3306;
        proxy_pass db;
        proxy_connect_timeout 1s; # detect failure quickly
    }
}

On the host where nginx runs, a listening socket is opened, and incoming connections to it are served by the built-in nginx balancer. As we all understand, in systems that use Docker or Kubernetes there is no rigid binding of a service to the host on which a given container runs. DNS resolution comes into play, and everything works fine until you need to connect to a specific node and a specific MySQL (MariaDB) instance, bypassing the balancer. At that point the static config becomes a real bottleneck.


What am I leading to?


Consider the case when we have different services inside the perimeter, but they must all be reachable through the same ports opened on the firewall and on nginx. Say we have three nodes running three independent instances: Prod, Dev and Test, each with its own ssh, mysql, nginx, apache, and so on.


From the HTTP/HTTPS point of view there are no problems. You create a bunch of virtual servers, bind each to the desired domain name; a request arrives, the server name is determined from the request, and the connection goes to the correct instance. Profit!


Services like ssh, mysql and postgresql have no notion of virtual hosting, so to organize a connection into a particular instance you often have to jump through hoops: open ports on the firewall, set up packet forwarding, arrange multihomed source routing, and so on.
Yet the solution lies on the surface. Packet forwarding is a fine thing, but manually editing firewall rules on the node that serves incoming traffic, in the middle of active development, is not very convenient.


Most nginx mechanisms work inside location blocks and are unavailable in stream.


However, there is a small loophole that lets you streamline dynamic connections to resources with minimal effort. For this you need a local DNS resolver backed by a memcached or redis database.


The essence is as follows. When a client initiates a connection to the server, nginx tries to resolve the client's IP address in the ssh.local or mysql.local zone on a local resolver at 127.0.0.1 listening on port 5353. The DNS response must contain the address of the server inside the closed segment to which the connection should be forwarded. Nginx does the rest of the work itself.


stream {
    error_log /var/log/nginx/ssh-forward-error.log debug;
    resolver 127.0.0.1:5353 valid=10s;

    map $remote_addr $sshtarget {
        default $remote_addr.ssh.local:22;
    }

    map $remote_addr $mysqltarget {
        default $remote_addr.mysql.local:3306;
    }

    server {
        listen 2223;
        proxy_pass $sshtarget;
        proxy_connect_timeout 1s; # detect failure quickly
    }

    server {
        listen 33306;
        proxy_pass $mysqltarget;
        proxy_connect_timeout 1s; # detect failure quickly
    }
}

What to return by default, when the client's address is not found in the mapping tables, is up to the system administrator. You can return any non-existent address, such as 127.0.0.2.
Note that the connection to that non-existent address will be made on behalf of nginx itself, so be careful when using this connection scheme. Whether to write your own DNS server or use a ready-made one is also a matter of choice. You can take a DNS server in Python and add a database lookup to it in a couple of lines, or take my old Perl project on which this configuration was tested.
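As a minimal sketch of such a resolver in Python (the zone name, the client IP and the target address are purely illustrative; in a real deployment the RECORDS dict would be a lookup in Redis or memcached):

```python
import socket
import struct

# Hypothetical mapping: "<client-ip>.ssh.local." -> internal target.
# In production this would be a Redis/memcached lookup, not a dict.
RECORDS = {
    "203.0.113.10.ssh.local.": "172.16.100.21",
}

def parse_qname(data, offset=12):
    """Extract the queried name (with trailing dot) from a raw DNS packet."""
    labels, pos = [], offset
    while data[pos] != 0:
        length = data[pos]
        labels.append(data[pos + 1:pos + 1 + length].decode())
        pos += 1 + length
    return ".".join(labels) + ".", pos + 1  # skip the terminating zero byte

def build_response(query):
    """Answer an A query from RECORDS, or return NXDOMAIN."""
    qname, qend = parse_qname(query)
    question = query[12:qend + 4]  # QNAME + QTYPE + QCLASS, echoed back
    ip = RECORDS.get(qname)
    if ip is None:
        # Header: same ID, QR=1 RD=1 RA=1 RCODE=3 (NXDOMAIN), QDCOUNT=1
        header = query[:2] + struct.pack(">HHHHH", 0x8183, 1, 0, 0, 0)
        return header + question
    # Header: same ID, QR=1 RD=1 RA=1, QDCOUNT=1, ANCOUNT=1
    header = query[:2] + struct.pack(">HHHHH", 0x8180, 1, 1, 0, 0)
    # Answer: pointer to QNAME (0xC00C), TYPE=A, CLASS=IN, short TTL, 4-byte address
    answer = struct.pack(">HHHIH", 0xC00C, 1, 1, 10, 4) + socket.inet_aton(ip)
    return header + question + answer

def serve(host="127.0.0.1", port=5353):
    """Single-threaded UDP loop serving the resolver nginx points at."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    while True:
        query, addr = sock.recvfrom(512)
        sock.sendto(build_response(query), addr)
```

A short TTL on the answer, combined with `valid=10s` on the nginx `resolver` directive, keeps nginx from caching a grant for longer than you intend.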


Managing the contents of the database can be implemented in anything. The main thing is to understand who should be routed where at any given moment.
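For illustration, here is a minimal stand-in for the Redis SETEX/GET pair such a database would typically use: an access grant is written with a time-to-live and expires on its own, so a stale client simply stops resolving. The class and key names are my own invention, not part of the original setup.

```python
import time

class TTLStore:
    """Toy key-value store mimicking Redis SETEX/GET with auto-expiry."""

    def __init__(self):
        self._data = {}

    def setex(self, key, ttl_seconds, value):
        """Store value; it becomes invisible after ttl_seconds."""
        self._data[key] = (value, time.monotonic() + ttl_seconds)

    def get(self, key):
        """Return the value, or None if missing or expired."""
        item = self._data.get(key)
        if item is None:
            return None
        value, expires = item
        if time.monotonic() >= expires:
            del self._data[key]  # lazy cleanup of the expired grant
            return None
        return value
```

A temporary grant then amounts to `store.setex("203.0.113.10.ssh.local.", 3600, "172.16.100.21")`: for one hour that client resolves to the internal host, after which the mapping silently disappears.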


So why "the dark side of the force"? In essence, this scheme lets you serve different information at different moments in time from different segments, with a single entry point in front of them all. You can never tell in advance which block of information from which segment will be served at a specific moment.


An established keepalive SSL tunnel provides an uninterrupted connection, while an entry that has been auto-expired from the DNS database prevents another host with the same IP from connecting to your resources.


For example: a closed corporate portal available only to those who have previously logged in to the management system, or a system for administering the company's internal resources that forwards an arbitrary port to an internal server. In my case this scheme is used for temporary forwarding of RDP connections to a specific computer on the network, based on the user's authorization, without using RD Gateway. Sometimes customers need to get into a particular segment inside the closed perimeter, and this temporary-access scheme works great.


I am not urging anyone to use this scheme on a permanent basis, since it is still not secure enough, but at critical moments it gives you a way to influence what happens inside the perimeter behind nginx.


© Aborche 2017



Source: https://habr.com/ru/post/336162/

