
In the life of a system administration team there comes a moment when there are too many servers to support. And perhaps too many people as well, which again raises the security question: if something goes wrong, keys have to be revoked everywhere at once.
We have 300 customers. For some that is "only 300", but for us it means almost 2,000 servers to maintain. Rather than store, update and manage a database of 2,000 passwords for 60 employees, control access to it, and explain to every client that 60 people will know the passwords to their servers, we built an authentication server and called it Isolate. Below is a description of its features and a link to GitHub: we have released it as open source.
We have dedicated authorization servers through which employees reach a specific supported server. We have been using this setup successfully for a long time, and now we have decided to give it a name and share it with the community.
So, Isolate is a set of utilities for an auth server plus an Ansible playbook for deploying it quickly. It lets us log in with a hardware key (security above all!) and conveniently manage a huge number of projects and servers. With it:
- employees do not know the root passwords (again, security);
- in an emergency the employee's hardware key is deactivated on the auth server and they lose access to client servers (fortunately, we have never had such a situation);
- all SSH sessions are recorded, so the time spent on servers can be accounted for.
When we take a server under support, we create a sudo user on it and add the auth servers' key. The employee then logs in to the auth server with a hardware key (we use YubiKey), finds the required server with the s (search) command (by project name, server name, website, etc.) and connects to it over SSH with the g (go) command.
Isolate highlights:
- users do not have access to the private key;
- all outgoing SSH sessions are logged;
- only the system's own access controls are used (SELinux support is coming soon);
- logging in to Isolate requires a one-time password (2FA, OTP); you can use either hardware keys or your favorite Google Authenticator;
- an SSH configuration manager with connections through an SSH proxy server and support for servers inside a VPN via an external gateway;
- installed via Ansible, but requires some manual changes to system files;
- supports CentOS 7, Ubuntu 16.04 and Debian 9.
What it looks like
Sample server list:
[~]$ s .

myproject
------
10001 | 11.22.22.22 | aws-main-prod
10002 | 11.33.33.33 | aws-dev
10003 | 11.44.44.44 | vs-ci
------
Total: 3

[~]$
The dot in s . serves here as a universal pattern that matches all servers.
An example of logging in to a server with a custom port and an SSH proxy:
[~]$ g myproject aws-dev
Warning: Permanently added 3.3.3.100 (RSA) to the list of known hosts.
Warning: Permanently added 10.10.10.12 (RSA) to the list of known hosts.

[root@dev ~]$
An example of logging in to an arbitrary server (one not described in the Isolate config) with arbitrary parameters:
[isolate ~]$ g 45.32.44.87 --user support --port 2232 --nosudo
Warning: Permanently added 45.32.44.87 (RSA) to the list of known hosts.
Principle of operation
Installation is described in some detail in the README on Github, so let's go straight to the principles of operation.
Access itself is restricted through ordinary OS system users. The access layer is sudo plus the ssh.py wrapper, whose purpose is to keep dangerous constructs from slipping through sudo; ssh.py validates the arguments and starts the SSH client, and that is where its responsibilities end.
For example:
$ sudo -l
(auth) NOPASSWD: /opt/auth/wrappers/ssh.py

$ sudo /opt/auth/wrappers/ssh.py -h
usage: ssh-wrapper [-h] [--user USER] [--port PORT] [--nosudo]
                   [--config CONFIG] [--debug] [--proxy-host PROXY_HOST]
                   [--proxy-user PROXY_USER] [--proxy-port PROXY_PORT]
                   [--proxy-id PROXY_ID]
                   hostname

positional arguments:
  hostname              server address (allowed FQDN,[az-],ip6,ip4)

optional arguments:
  -h, --help            show this help message and exit
  --user USER           set target username
  --port PORT           set target port
  --nosudo              run connection without sudo terminating command
  --debug
  --proxy-host PROXY_HOST
  --proxy-user PROXY_USER
  --proxy-port PROXY_PORT
  --proxy-id PROXY_ID   just for pretty logs
This script is also responsible for logging: it generates the log file names and their location, determines the name of the user who invoked sudo, and creates the directories for the log files. Next to each log there is a *.meta file containing the current connection object in JSON.
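For illustration, here is a minimal sketch of what a wrapper of this kind might look like. This is not the actual ssh.py code: the hostname pattern, the log paths and the use of script(1) for session recording are assumptions made purely for the example.

#!/usr/bin/env python3
# Illustrative sketch of an ssh.py-style wrapper; not the real Isolate code.
import argparse
import datetime
import json
import os
import re
import sys

# Allow only plain hostnames / IP addresses so nothing dangerous
# can be smuggled onto the ssh command line.
HOSTNAME_RE = re.compile(r'^[A-Za-z0-9.:-]+$')

def main():
    parser = argparse.ArgumentParser(prog='ssh-wrapper')
    parser.add_argument('hostname')
    parser.add_argument('--user', default='root')
    parser.add_argument('--port', type=int, default=22)
    args = parser.parse_args()

    if not HOSTNAME_RE.match(args.hostname):
        sys.exit('refusing suspicious hostname: %r' % args.hostname)

    # The real user behind sudo, used for naming the log files.
    operator = os.environ.get('SUDO_USER', 'unknown')
    stamp = datetime.datetime.now().strftime('%Y%m%d-%H%M%S')
    log_dir = os.path.join('/var/log/isolate', operator)   # assumed location
    os.makedirs(log_dir, exist_ok=True)
    log_base = os.path.join(log_dir, '%s-%s' % (stamp, args.hostname))

    # Connection metadata goes into a *.meta file next to the session log.
    with open(log_base + '.meta', 'w') as meta:
        json.dump({'operator': operator, 'host': args.hostname,
                   'user': args.user, 'port': args.port}, meta)

    # Record the whole session with script(1) and hand control to ssh.
    ssh_cmd = 'ssh -p %d %s@%s' % (args.port, args.user, args.hostname)
    os.execvp('script', ['script', '-q', '-c', ssh_cmd, log_base + '.log'])

if __name__ == '__main__':
    main()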
All the basic functionality lives in helper.py; keeping it isolated from ssh.py makes it possible to implement even complex logic without fear of a mistake in determining user rights or some other unsafe operation.
The user-facing functions are thin shell wrappers defined in shared/bootstrap.sh.
For example, server search:
s () {
    if [[ $# -eq 0 ]]; then
        echo -e "\n Usage: s <query> \n"
        return
    elif [[ $# -gt 0 ]]; then
        "${ISOLATE_HELPER}" search "${@}"
    fi
}
You can work through a proxy without installing extra packages: an SSH server with nc/netcat on it is enough. You could also use the port-forwarding feature of modern SSH/SSHD, but we do not rely on it, since quite a few outdated SSHD installations still do not support it.
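As a rough sketch of the netcat approach (the helper function below is made up for the example and is not taken from Isolate), all that is needed is a ProxyCommand option asking the gate to pipe the raw connection to the target with nc:

# Illustrative only: build an ssh command that reaches the target through a
# proxy host, using nothing but sshd and nc on the proxy side.
def build_ssh_command(target, user='root', port=22,
                      proxy_host=None, proxy_user='proxy', proxy_port=22):
    cmd = ['ssh', '-p', str(port)]
    if proxy_host:
        # The proxy simply forwards the TCP stream to the target with nc.
        proxy_cmd = 'ssh -q -p %d %s@%s nc %s %d' % (
            proxy_port, proxy_user, proxy_host, target, port)
        cmd += ['-o', 'ProxyCommand=%s' % proxy_cmd]
    cmd.append('%s@%s' % (user, target))
    return cmd

# Example: connect to 10.10.10.12 through the external gate 3.3.3.100.
print(' '.join(build_ssh_command('10.10.10.12', proxy_host='3.3.3.100')))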
When you try to connect with the g function/alias, helper.py is invoked again: it checks the arguments, classifies the target as a server ID, IP address, FQDN or project, and starts ssh.py with the right arguments. If you try to log in by IP/FQDN without specifying a project/group, the default SSH config is used.
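A simplified sketch of that classification step could look like this (the function below is hypothetical; the real helper.py may classify targets differently):

# Hypothetical sketch of classifying the first argument of "g";
# the real helper.py logic may differ.
import ipaddress
import re

FQDN_RE = re.compile(r'^[a-z0-9]([a-z0-9.-]*[a-z0-9])?$', re.IGNORECASE)

def classify(arg):
    """Return one of: 'server_id', 'ip', 'fqdn', 'project'."""
    if arg.isdigit():
        return 'server_id'   # e.g. g 12345
    try:
        ipaddress.ip_address(arg)
        return 'ip'          # e.g. g 45.32.44.87
    except ValueError:
        pass
    if '.' in arg and FQDN_RE.match(arg):
        return 'fqdn'        # e.g. g some.host.example.com
    return 'project'         # e.g. g rogairoga nyc-prod-1

for sample in ('12345', '45.32.44.87', 'db1.example.com', 'rogairoga'):
    print(sample, '->', classify(sample))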
The per-server settings from the config are applied only when the server is specified precisely, for example:
$ g rogairoga nyc-prod-1
Or, if the server sits behind the corporate proxy, you can specify an arbitrary FQDN or IP address after the project name:
$ g rogairoga 192.168.1.1
All the usual additional arguments for g are also available:
$ g rogairoga 192.168.22.22 --port 23 --user support --nosudo
It is also possible to log in by server ID:
$ g 12345
Instead of a conclusion
The Isolate source code is published on Github. We hope our solution will help many DevOps teams structure and simplify their work with servers. We look forward to comments, suggestions and, of course, pull requests! You can suggest ideas or ask questions in Telegram Chat.
Our future plans:
- access rights per user and per project;
- a helper for transferring files through the auth server to/from a specific machine;
- integration with Zabbix (a tech preview is already available!).
And next we want to open-source our Telegram client; you can read about it here.