
Everyone knows that ssh can do port forwarding (create tunnels). From the ssh manual you can also learn that OpenSSH can dynamically allocate ports for remote forwarding and execute predefined commands. Everyone also knows that Ansible (not counting Tower) has no notion of a server and a client (in the sense of ansible-server / ansible-agent): there is a playbook that can be executed both locally and remotely over an ssh connection. There is also ansible-pull, a script that checks a git repository containing your playbooks and, if there are changes, runs a playbook to apply updates. In most cases where you cannot push, you can pull, but there are exceptions.
In this article, I will try to show how dynamic port allocation for ssh tunnels can be used to implement a poor man's provisioning callback on any server with OpenSSH and Ansible, and how I arrived at this.
So, what if you still need a (central) server that stores the ansible project, perhaps even with secret keys and access to the entire infrastructure? A server to which (for example) new hosts can connect for initial configuration (initialization). Sometimes such initialization affects other hosts as well, such as an http balancer. This is where remote ssh port forwarding and a reverse connection (ssh back-connect) come in handy.
In principle, tunnels let you do all sorts of useful things; in particular, a reverse connection is useful for:
- Remote support of machines behind NAT (helpdesk-style tasks: setting up an environment, fixing something, etc.);
- Running backups (I do not know why, but you can);
- Accessing your workstation at the office.
You can probably think of other uses here; it all depends on your imagination. For now, let's dwell on what a reverse ssh connection actually is.
A quick reference on how a reverse connection happens
No magic, just an OpenSSH client and an OpenSSH server. The client creates the whole tunnel with a single command:
ssh -f -N -T -R22222:localhost:22 server.example.com
-f - go to background mode (this flag and the next two are optional);
-N - do not execute remote commands;
-T - disable pseudo-terminal (pty) allocation; useful if you want to run this command from cron;
-R [bind_address:]port:host:hostport - the default bind_address on the server is 127.0.0.1; the listen port can be arbitrary (from the unprivileged range), here we set 22222. Accordingly, from the server you can connect back to port 22 on the client's 127.0.0.1.
Then, on the server, you can simply execute:
ssh localhost -p22222
And start performing remote support, configuration, backups, or any other tasks.
A little more about setup and authorization
If you know all about it, you can skip this part.
Suppose we have the user ansible on the central server (SCM / Backup / CI / etc.) and the same user on the client machine (the names are not fundamental; they may differ). Both have the openssh server/client installed.
On the client machine (as well as on the server), an ssh key is generated (for example, rsa):
ssh-keygen -b 4096 -t rsa -f $HOME/.ssh/id_rsa
The client and server administrator exchange public keys. The administrator of the central server should write something like this to authorized_keys:
$ cat $HOME/.ssh/authorized_keys
command="echo 'Connect successful!'; countdown 3600",no-agent-forwarding,no-X11-forwarding ssh-rsa AAAAB3NzaC1...JhPWP ansible@dev.example.com
Read more about the options in man authorized_keys. In my case, a function that displays a countdown is triggered; after an hour the command finishes and the session is closed (the -f / -N flags are not used here).
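The countdown helper itself is not shown in the article. Here is a minimal sketch of what such a function might look like (the name comes from the authorized_keys entry above; the implementation is my assumption, matched to the output shown below):

countdown() {
    # Print a ticking HH:MM:SS countdown, one update per second.
    local seconds=$1
    while [ "$seconds" -gt 0 ]; do
        seconds=$((seconds - 1))
        printf '\r%02d:%02d:%02d' $((seconds / 3600)) $((seconds % 3600 / 60)) $((seconds % 60))
        sleep 1
    done
    echo
}

When the hour runs out, the function returns, the forced command finishes, and the session (together with the tunnel) is closed.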
After that, the client can back-connect to the server and see something like this:
ansible@dev:~$ ssh -R22222:localhost:22 server.example.com
Connect successful!
00:59:59
Seeing the countdown, the user (developer / accountant?!) happily informs the server administrator that the connection is up (if he doesn't know already), and something can now be done on the machine.
The administrator only needs to connect with an ssh client and start tinkering:
ansible@server:~$ ssh localhost -p22222
Everything is simple, clear and accessible.
But what if you need to make such a connection without user or administrator intervention, for example for backups, or to automate server configuration when scaling a web project with dynamically growing computing power? More on that below.
From idea to ready solution
The idea of automating routine processes and keeping everything under control is quite persistent and familiar (probably) to any system administrator.
Even if your servers are in perfect order, you usually know little about the developers' working environment. In my case, the developer's working environment is a virtual machine (VM) that almost completely mirrors production.
People come and go, and the base VM image we hand out to newcomers changes. To keep the local dev environment in sync with stage/production and to do less manual work, a playbook was written that applies the same roles as in the production environment, and a corresponding cron job was set up.
In principle, this all worked well: the VMs received updates in pull mode. But at some point we began to store important keys and passwords in the repository (encrypted, of course), and it became obvious that this "security" would be meaningless if we handed out our vault password to everyone. So it was decided to push changes to the VMs over ssh tunnels.
At first there was a simple idea to hardcode everything, so that each client connected on a predefined server port. In principle, this is fine if you have 3-5 people, even 10-15. But what if in half a year there are 50-100? You could probably come up with some kind of playbook that would manage all of this on demand, but that is not our way. I started thinking, reading man pages, and googling.
If you look at the man page (man ssh), you can find the following lines:
-R [bind_address:]port:host:hostport ... If the port argument is '0', the listen port will be dynamically allocated on the server and reported to the client at run time. When used together with -O forward the allocated port will be printed to the standard output.
That is, an ssh server can allocate ports dynamically, but only the client learns which one. On the server you can list the ports open for listening (netstat / lsof), but since there can be several simultaneous connections, this information is rather useless: it is unclear which port belongs to whom.
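For example, requesting port 0 looks like this from the client's side (the allocated port number here is illustrative):

ansible@client:~$ ssh -N -T -R 0:localhost:22 server.example.com
Allocated port 33133 for remote forward to localhost:22

The message is printed on the client; the server side has no convenient way to learn which connection got which port.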
Then I stumbled upon an article in which the author said he had written a patch for OpenSSH that adds an SSH_REMOTE_FORWARDING_PORTS variable containing the local ports assigned when the reverse tunnel is initialized. Unfortunately, the patch was never accepted; the OpenSSH developers are very conservative. Judging by the mailing-list thread, they pushed back in every way and offered alternative solutions. Perhaps not without reason. :)
After some reflection, I came up with a simple crutch for telling the server which port it had allocated. When connecting to the server, the client can execute commands on it by passing them as a command-line argument; the server sees this argument as SSH_ORIGINAL_COMMAND. Nothing prevents us from creating the tunnel in the background, saving the output that contains the port, parsing out just the port number, and sending it to the server with a second command. On the server, a wrapper script then substitutes the SSH_ORIGINAL_COMMAND variable as the port for the ansible-playbook connection.
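To illustrate the idea (the port number is made up): when authorized_keys forces a command, the client's argument is not executed but exposed to that command:

ansible@client:~$ ssh server.example.com 33133
# sshd runs the command= script from authorized_keys with
# SSH_ORIGINAL_COMMAND=33133 in its environment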
What does this look like?
On the client (a fragment of the script with the connection function):
ansible@client:~$ cat ssh-tunnel
The function works in two steps: the first creates a persistent multiplexed tunnel, the second sends the allocated port value to the server, triggering the reverse connection. After the script finishes, the connection to the server is closed via the control socket.
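The script itself is not reproduced here, so below is a minimal sketch of how such a function might look, built on the -O forward behavior from the man excerpt above (the server name, socket path, and overall structure are my assumptions):

#!/bin/bash
# Sketch of the client-side tunnel function (hypothetical names).
SERVER=server.example.com
SOCKET=$HOME/.ssh/tunnel-$$.sock

ssh_tunnel() {
    # Step 1: start a persistent multiplexed master connection
    # in the background (no remote command, no tty).
    ssh -f -N -T -M -S "$SOCKET" "$SERVER"

    # Step 2: request a remote forward with port 0; together with
    # -O forward, the dynamically allocated port is printed to stdout.
    PORT=$(ssh -S "$SOCKET" -O forward -R 0:localhost:22 "$SERVER")

    # Send the port to the server: the wrapper there receives it in
    # SSH_ORIGINAL_COMMAND and connects back through the tunnel.
    ssh -S "$SOCKET" "$SERVER" "$PORT"

    # Close the master connection via the control socket.
    ssh -S "$SOCKET" -O exit "$SERVER"
}

ssh_tunnel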
Here I had to fiddle with the options a bit so that everything could be started either manually from a terminal or from cron.
For cron, the variables the script needs must be set explicitly in the crontab.
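A crontab entry might look something like this (the paths, variables, and schedule are assumptions):

# cron provides only a minimal environment, so set what the script needs
HOME=/home/ansible
PATH=/usr/local/bin:/usr/bin:/bin
SERVER=server.example.com
# phone home every 30 minutes
*/30 * * * * /home/ansible/ssh-tunnel >> /home/ansible/ssh-tunnel.log 2>&1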
On the server:
ansible@server:~$ cat initial_run
The key point here is getting the port that the server needs to connect back to from the SSH_ORIGINAL_COMMAND variable. In principle, I could have simply assigned it to ansible_ssh_port, but I decided that for tidiness it was worth a separate variable, REMOTE_PORT.
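The wrapper is not reproduced here either; a minimal sketch of what initial_run might look like (the playbook path and inventory are assumptions):

#!/bin/bash
# Forced via command="/home/ansible/initial_run" in authorized_keys;
# the client's argument arrives in SSH_ORIGINAL_COMMAND.

# Accept only digits, so arbitrary client input is never executed.
case "$SSH_ORIGINAL_COMMAND" in
    ''|*[!0-9]*) echo "Bad port: '$SSH_ORIGINAL_COMMAND'" >&2; exit 1 ;;
esac

# Hand the port to the playbook as a separate variable; the playbook
# maps REMOTE_PORT to ansible_ssh_port and connects to localhost,
# i.e. to the client end of the reverse tunnel.
exec ansible-playbook -i 'localhost,' playbooks/initial.yml \
    -e "REMOTE_PORT=$SSH_ORIGINAL_COMMAND"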
The contents of the playbooks/roles are no longer fundamental here, although I added examples to my repository on github.com.
That is probably all. What to do with this and how it can be useful is for you to decide.
I would point out a couple of interesting usage scenarios:
- Dynamic allocation of servers and their automatic configuration (a load-balancer / app-server bundle);
- Keeping geographically scattered servers that have no direct inbound access (different branches, offices, etc.) in a consistent state.
Suggest your own options in the comments and share more interesting implementations of this functionality.
I would be grateful if you report any "tyops" you find to me in a PM.