Dear colleagues, I would like to bring to your attention a configuration management system written entirely in Python. It is fairly new, but it deserves attention. If you are wondering how to manage a fleet of servers and workstations as a single system with this tool, read on.

Why Salt?
Some time ago I realized that the number of servers I manage had grown from a handful to more than 20 and kept growing. Questions arose about centralized software updates, password changes, quickly bringing up new virtual machines, and similar routine tasks familiar to any IT specialist.
Naturally, I began to survey the market. Wikipedia, for example, has a list of free management systems:
Comparison_of_open_source_configuration_management_software, which I used as my starting point when choosing a system to study. My criteria were:
- at least rudimentary Windows support
- good Linux support
- open source code
- not Ruby (I admit this is my own shortcoming, but Ruby and I never got along)
As you may have guessed, the choice fell on Salt.
Concept and basic concepts
As I understand it, Salt solves two tasks:
- centralized command execution on groups of machines
- keeping systems in previously described states
The system consists of clients (minions) and servers (masters). The connection is outgoing from the minion's side, so NAT and the like pose no problem.
Grains
Groups of computers are formed on the basis of so-called grains (Grains): system parameters collected by the minion service at startup.
For example, if we want to check the availability of all nodes on which CentOS is installed, we just need to write:
salt -G 'os:CentOS' test.ping
Or we want to know the number of cores in all our 64-bit systems:
salt -G 'cpuarch:x86_64' grains.item num_cpus
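Grain matchers can also be combined with other targeting methods via Salt's compound matcher. A sketch, assuming minions whose IDs start with web (the names here are an assumption, not from the original setup):

```
# ping only CentOS minions whose ID matches web*
# G@ selects a grain match, the bare pattern is a glob on the minion ID
salt -C 'G@os:CentOS and web*' test.ping
```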
We can also define our own grains alongside the standard ones:
grains:
  roles:
    - webserver
    - memcache
  deployment: datacenter4
  cabinet: 13
  cab_u: 14-15
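Once custom grains like these are defined, they can be targeted just like the standard ones. A sketch, assuming minions carrying the roles grain from the example above:

```
# run a command on every minion whose roles grain contains webserver
salt -G 'roles:webserver' test.ping
```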
States
The second component is state files, which let us describe the required state of a system; based on these files, the minion later brings the client system to the state we need.
That sounds abstract, so here is an example.
First, let's check in the master's configuration file where the SLS files should be stored:
vim /etc/salt/master
and look for something like:
file_roots:
  base:
    - /srv/salt
So this folder (/srv/salt/) is where our SLS files will live.
The top.sls file located in this folder is required.
Here is an example of the contents of this file on my test machine:
base:
  'web2':
    - fail2ban
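Besides exact minion IDs, the top file can also target by grains. A sketch (the 'os:CentOS' target here is an illustrative assumption, not part of my setup):

```yaml
base:
  # apply the fail2ban state to every CentOS minion
  'os:CentOS':
    - match: grain
    - fail2ban
```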
fail2ban (the third line) can be either an SLS file or a folder containing an init.sls file. The second option seems more convenient to me, so that is what I did.
Here are the contents of my fail2ban state:
cat fail2ban/init.sls
fail2ban.conf:
  file.managed:
    - name: /etc/fail2ban/fail2ban.conf
    - source: salt://fail2ban/fail2ban.conf

jail.conf:
  file.managed:
    - name: /etc/fail2ban/jail.conf
    - source: salt://fail2ban/jail.conf

fail2ban:
  pkg:
    - installed

  service.running:
    - enable: True
    - watch:
      - file: fail2ban.conf
      - file: jail.conf
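The salt:// URLs in the source lines resolve relative to file_roots, so the directory layout implied by this example would look roughly like this (a sketch of my assumed layout):

```
/srv/salt/
├── top.sls
└── fail2ban/
    ├── init.sls
    ├── fail2ban.conf
    └── jail.conf
```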
There are 3 entities in this SLS file:
- the file fail2ban.conf (line 1)
- the file jail.conf (line 6)
- the fail2ban package itself (line 11)
According to this SLS file, Salt will:
- compare the files on the client and update them if they differ (lines 2 and 7)
- check that the fail2ban package is installed (lines 12-13)
- check that the service is running (line 15)
- check that the service starts automatically (line 16)
- restart the service if either of the files has changed (line 17)
Installation
First of all, I strongly recommend against using the bootstrap file offered by the project's authors, since it performs a number of, in my opinion, strange actions (e.g. yum -y update and using the EPEL testing repository).
In fact, at least on CentOS, there is nothing difficult about installing by hand.
The salt-master and salt-minion packages are in EPEL.
The service names on CentOS are the same as the package names.
Service configuration files live in /etc/salt/.
Master
By default, the master listens on two TCP ports, which should be opened in iptables:
iptables -I INPUT -p tcp --dport 4505:4506 -j ACCEPT
4505 - for minions to communicate with the master
4506 - for file transfer
Minion
The basic minion configuration amounts to specifying the master's address; I personally also set an id. If you don't, the FQDN is used.
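A minimal /etc/salt/minion along these lines might look as follows (the master address is a placeholder; web2 is the minion name used later in this article):

```yaml
# address or hostname of the Salt master (placeholder value)
master: 192.168.0.1
# explicit minion ID; without it, the FQDN is used
id: web2
```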
On startup, the minion connects to the master and hands over its public key.
Next, on the master, run the salt-key utility:
[root@test salt]# salt-key
Accepted Keys:
Unaccepted Keys:
web2
Rejected Keys:
The key we need appears in the Unaccepted Keys list; to move it to the accepted ones:
[root@test salt]# salt-key -a web2
Key for minion web2 accepted.
That completes connecting the first minion, and we can check its availability:
[root@test salt]# salt 'web2' test.ping
web2: true
And if we want to apply our fail2ban configuration:
[root@test salt]# salt 'web2' state.highstate
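Before applying states for real, a dry run can be useful: state.highstate accepts a test=True argument that reports what would change without changing anything. A sketch, assuming the connected web2 minion from above:

```
# show pending changes without applying them
salt 'web2' state.highstate test=True
```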

What can you do with the minions?
List of modules for remote control:
salt.readthedocs.org/en/latest/ref/states/all/salt.states.pkg.html#module-salt.states.pkg
List of modules for managing and monitoring states:
salt.readthedocs.org/en/latest/ref/states/all/salt.states.file.html#module-salt.states.file
Questions
There are 2 points I have not figured out yet:
- How can key acceptance on the master be automated for full automation? (Most likely I will write my own module using HTTP requests.)
- What happens to a minion if its master is replaced: will it obey the new master? Thanks for this question go to my colleague and good friend Jan.
I will try to answer these questions in the following articles about Salt, once I have tried everything myself.
Thank you for your attention. I look forward to your comments and remarks.