In this post I would like to share my experience of using the SaltStack configuration management system, and, in particular, its use in Masterless mode using the salt-ssh component.
In fact, salt-ssh is analogous to the Ansible system.
salt-ssh '*-ec2.mydomain.com' test.ping
The following topics will be covered: installing salt-ssh, the roster file, ad-hoc command execution, Grains, Pillar, and writing states from a simple one up to a real-life example.
Several years ago, having had enough of Puppet (multiple environments, 100+ nodes), I was choosing a new configuration management system for a new project, and Masterless operation was a key requirement. At the same time I wanted to keep the option of a Master-Slave setup, I wanted good, extensive documentation and flexibility, and I wanted to be able to manage cloud infrastructure.
I also wanted to build a system in which several environments could comfortably coexist. All of this turned out to be achievable with salt-ssh.
Salt-ssh is a component of SaltStack which, like Ansible, uses ssh to connect to remote machines and does not require any preliminary configuration on their side. No agents. Pure ssh!
Of course, when choosing a system, I also considered Ansible. But back then the scales tipped toward SaltStack.
Unlike Ansible, SaltStack uses Jinja2 for both template processing and logic building.
Moreover, this logic can be built in almost any way you like, which is both good and bad. Good, because it gives flexibility; bad, because there is no single standard way or approach to implementation. It even seems to me that SaltStack is more of a construction kit in this regard.
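For a quick taste of what that looks like in practice, nothing prevents you from generating states with ordinary Jinja constructs right inside an SLS file. A minimal sketch (the user names are purely hypothetical; states themselves are covered below):

# sketch: generating several states with a Jinja loop
{% for login in ['alice', 'bob'] %}
user_{{ login }}:
  user.present:
    - name: {{ login }}
{% endfor %}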
Also, templates and logic are rendered at launch time. The resulting bundle of templates, settings and instructions is copied to the remote server and executed there. On completion, salt-ssh prints a report to the console of what was executed and what errors occurred, if any. Here the difference with Ansible is quite striking: the latter runs its tasks/playbooks sequentially, in shell-script fashion. I will not deny that it is more pleasant to watch the progress of Ansible runs, but that gradually fades into the background once the number of hosts exceeds a few dozen. Compared to Ansible, SaltStack also operates at a higher level of abstraction.
Either way, both Ansible and salt-ssh are very interesting tools, each with its own advantages and disadvantages.
SaltStack is a configuration and infrastructure management system, both at the level of individual servers and across various cloud platforms (SaltCloud). It is also a remote command execution system. It is written in Python and is developing very rapidly. It has many different modules and features, including things like Salt-Api and Salt-Syndic (a "master of masters", i.e. a system that lets you build a hierarchy of master servers, a syndicate).
By default, SaltStack assumes a Master-Slave mode of operation. Messaging between nodes goes over ZeroMQ. It can scale horizontally using MultiMaster setups.
But best of all, Salt can also work in agentless mode. This can be done either by applying states locally or with salt-ssh, the hero of this article.
Salt master is the process on the machine from which connected agents are managed. In the case of salt-ssh, the "master" is simply the node where our state and pillar data live.
Salt minion is the process running on managed machines, i.e. the slaves. In the case of salt-ssh, a minion is any remote server.
State - a declarative description of the desired state of a system (analogous to playbooks in Ansible)
Grains - static information about a remote minion (RAM, CPUs, OS, etc)
Pillar - variables assigned to one or more minions
top.sls - the central files that define which state and pillar data are assigned to which minion
highstate - all state data defined for a minion
SLS - the name for all state and pillar configuration files in SaltStack; they are YAML files
One of the drawbacks of SaltStack is its higher entry threshold. Below I will show examples to make getting started with this great system easier.
Installing salt-ssh is trivial.
The site https://repo.saltstack.com/ has all the necessary repositories and instructions for connecting them to various systems.
Only the salt-ssh package needs to be installed:
sudo apt-get install salt-ssh
(the example is for Debian-based systems)
To start using salt-ssh, we just need to install it. At a minimum, you can then manage your local machine, or, more illustratively, any remote server.
In this example I will use two virtual machines created with Vagrant for the tests. Salt-ssh will be installed on one of them; the other will be clean, apart from the public key of the first machine added to it.
The Vagrantfile and the necessary salt states are available in the repository https://github.com/skandyla/saltssh-intro
# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure(2) do |config|
  # VM with salt-ssh
  config.vm.define :"saltsshbox" do |config|
    config.vm.box = "ubuntu/trusty64"
    config.vm.hostname = "saltsshbox"
    config.vm.network "private_network", ip: "192.168.33.70"
    config.vm.provider "virtualbox" do |vb|
      vb.memory = "512"
      vb.cpus = 2
    end
    config.vm.synced_folder ".", "/srv"

    # Deploy vagrant insecure private key inside the VM
    config.vm.provision "file", source: "~/.vagrant.d/insecure_private_key", destination: "~/.ssh/id_rsa"

    # Install salt-ssh
    config.vm.provision "shell", inline: <<-SHELL
      wget -O - https://repo.saltstack.com/apt/ubuntu/14.04/amd64/latest/SALTSTACK-GPG-KEY.pub | sudo apt-key add -
      sudo echo 'deb http://repo.saltstack.com/apt/ubuntu/14.04/amd64/latest trusty main' > /etc/apt/sources.list.d/saltstack.list
      sudo apt-get update
      sudo apt-get install -y salt-ssh
    SHELL
  end

  # VM for testing
  config.vm.define :"testserver" do |config|
    config.vm.box = "ubuntu/trusty64"
    config.vm.hostname = "testserver"
    config.vm.network "private_network", ip: "192.168.33.75"
    config.vm.provider "virtualbox" do |vb|
      vb.memory = "512"
    end

    # Deploy vagrant public key
    config.vm.provision "shell", inline: <<-SHELL
      curl https://raw.githubusercontent.com/mitchellh/vagrant/master/keys/vagrant.pub >> ~/.ssh/authorized_keys2 2>/dev/null
      curl https://raw.githubusercontent.com/mitchellh/vagrant/master/keys/vagrant.pub >> /home/vagrant/.ssh/authorized_keys2 2>/dev/null
    SHELL
  end
end
I assume the audience is familiar with Vagrant, but just in case: Vagrant is a virtualization framework designed to simplify development and make environments reproducible. To run the virtual machines, we will need Vagrant and VirtualBox installed.
Next we clone the repository:
git clone https://github.com/skandyla/saltssh-intro
and in it we initialize the Vagrant virtual machines:
vagrant up
Once both machines are up, log in to saltsshbox:
vagrant ssh saltsshbox
All further work will be carried out from this virtual machine. By default, SaltStack assumes that we will act as root, so we immediately do:
vagrant@saltsshbox:~$ sudo -i
Target hosts are listed in the /etc/salt/roster file, although you can point salt-ssh at any other roster file. In a sense you can draw an analogy with Ansible inventory files. A roster file is YAML with many different options. Below are several ways of describing the same host.
testserver:
  host: 192.168.33.75
  priv: /home/vagrant/.ssh/id_rsa

thesametestserver:
  host: 192.168.33.75
  user: vagrant
  sudo: True

thesametestserver2:
  host: 192.168.33.75
  user: vagrant
  passwd: vagrant
  sudo: True
Now let's try to execute test.ping for all the hosts specified in our roster.
root@saltsshbox:~# salt-ssh -i --roster-file=/srv/saltstack/saltetc/roster_test '*' test.ping
Permission denied for host thesametestserver, do you want to deploy the salt-ssh key? (password required):
[Y/n] n
thesametestserver:
    ----------
    retcode:
        255
    stderr:
        Permission denied (publickey,password).
    stdout:
testserver:
    True
thesametestserver2:
    True
As you can see, salt-ssh complained that it could not log in to one of the remote servers and offered to deploy its key there, which I declined. The other two entries (in fact the same server under different names) responded positively. The failure happened because we are running as root, which has no ssh keys configured. So we can simply add the key via ssh-agent and repeat the command.
root@saltsshbox:~# eval `ssh-agent`; ssh-add /home/vagrant/.ssh/id_rsa
Agent pid 2846
Identity added: /home/vagrant/.ssh/id_rsa (/home/vagrant/.ssh/id_rsa)
root@saltsshbox:~# salt-ssh -i --roster-file=/srv/saltstack/saltetc/roster_test '*' test.ping
testserver:
    True
thesametestserver:
    True
thesametestserver2:
    True
Now everything is fine! You can just as easily add a passphrase-protected key via ssh-agent. And if you do decide to deploy the key that salt itself offers, by default it is taken from /etc/salt/pki/master/ssh/salt-ssh.rsa
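If you prefer to authorize that key on a remote host yourself, one possible way is the usual ssh-copy-id (a sketch; it assumes password authentication is still enabled on the target):

# push salt-ssh's own public key to the test server (one-off bootstrap step)
ssh-copy-id -i /etc/salt/pki/master/ssh/salt-ssh.rsa.pub vagrant@192.168.33.75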
Here, for the test, I deliberately worked with a separate roster file to show some interesting nuances. For further work we will not need to specify the roster, because it is already symlinked to the expected location (/etc/salt/roster). The -i switch is needed when we start working with new hosts: it simply disables StrictHostKeyChecking, allowing a new host key to be accepted. We will not need it for further work either.
root@saltsshbox:~# salt-ssh '*' test.ping
testserver:
    True
Let me remind you that by default salt looks for the roster at /etc/salt/roster, in which we now have only one host defined.
Now that we have seen that our salt-ssh machine can reach the test server specified in the roster, let's work with it in ad-hoc style.
root@saltsshbox:~# salt-ssh testserver cmd.run "uname -a"
testserver:
    Linux testserver 3.13.0-87-generic #133-Ubuntu SMP Tue May 24 18:32:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
cmd.run is essentially the equivalent of Ansible's ad-hoc command execution (the -a option of the ansible command).
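For comparison, a roughly equivalent Ansible ad-hoc call might look like this (assuming testserver is defined in an Ansible inventory):

ansible testserver -m command -a "uname -a"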
You can also use any of SaltStack's built-in execution modules, for example:
salt-ssh testserver service.get_enabled
salt-ssh testserver pkg.install git
salt-ssh testserver network.interfaces
salt-ssh testserver disk.usage
salt-ssh testserver sys.doc
The last command prints documentation for the modules and, most importantly, examples of their use. Additionally, you can view the complete list of available SaltStack modules.
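If the full sys.doc output is too much, you can usually narrow it down to a single module or function, for example:

salt-ssh testserver sys.doc cmd.run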
Grains are a powerful mechanism: a collection of facts about a remote system. Later on, you can also build various logic on top of grains.
But first, let's see how to start working with them:
root@saltsshbox:~# salt-ssh testserver grains.items
testserver:
    ----------
    SSDs:
    biosreleasedate:
        12/01/2006
    biosversion:
        VirtualBox
    cpu_flags:
        - fpu
        - vme
        - de
        - pse
        - tsc
        ...
    cpu_model:
        Intel(R) Core(TM) i7-2620M CPU @ 2.70GHz
    cpuarch:
        x86_64
    disks:
        - sda
    ...
Command output is trimmed.
The desired Grains branch can be accessed by directly specifying it:
root@saltsshbox:~# salt-ssh testserver grains.get 'ip4_interfaces'
testserver:
    ----------
    eth0:
        - 10.0.2.15
    eth1:
        - 192.168.33.75
    lo:
        - 127.0.0.1
Or even more specifically:
root@saltsshbox:~# salt-ssh testserver grains.get 'ip4_interfaces:eth1'
testserver:
    - 192.168.33.75
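Beyond ad-hoc queries, the same grains are available inside states and templates via salt['grains.get']. A tiny fragment of a hypothetical Jinja template:

{# take the first IPv4 address of eth1 from grains (hypothetical config fragment) #}
{%- set eth1_ip = salt['grains.get']('ip4_interfaces:eth1') | first %}
listen_address: {{ eth1_ip }}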
Now it is time to talk about another important file, /etc/salt/master. It normally comes with the salt-master package and defines some important logging options, as well as the directories in which salt will look for our state and pillar data. The default for states is the /srv/salt directory, but in practice it is often more convenient to use a different structure, including for these examples.
/etc/salt/master:
state_verbose: False
state_output: mixed

file_roots:
  base:
    - /srv/saltstack/salt

pillar_roots:
  base:
    - /srv/saltstack/pillar
state_verbose and state_output control how state execution results are displayed on the screen. In my opinion this combination is the most practical, but I recommend experimenting.
file_roots and pillar_roots indicate the paths to our state and pillar data, respectively.
Important! There may be several of these paths, following the principle of different environments, different data, and so on. But that is a topic for a separate article on a multi-environment setup; for a start we just need to know where to put our state files so that salt can find them.
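Just to illustrate the idea (the extra dev paths here are hypothetical), a multi-environment layout in /etc/salt/master could look roughly like this:

file_roots:
  base:
    - /srv/saltstack/salt
  dev:
    - /srv/saltstack/dev/salt

pillar_roots:
  base:
    - /srv/saltstack/pillar
  dev:
    - /srv/saltstack/dev/pillar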
Next, in each of these directories (file_roots and pillar_roots), salt will look for top.sls files, which define the further logic of processing salt files.
In our case:
/srv/saltstack/salt/top.sls:
base:
  '*':
    - common
    - timezone
  'testserver':
    - chrony
This means: apply the common and timezone states to all hosts, and additionally apply chrony (a time synchronization service) to testserver.
For pillar, a top.sls file is also required; it determines in what order and how variables are assigned.
/srv/saltstack/pillar/top.sls:
base:
  '*':
    - timezone
  'testserver':
    - hosts/testserver
In our case this file is extremely simple: it connects all the variables from the timezone.sls file for everyone and additionally connects the variables from the hosts/testserver file for our testserver. Behind this simplicity, however, lies a powerful concept, since variables can be assigned however you like and for any environment. Variable overriding and merging is a separate topic; for now I will just say that priority goes from top to bottom. That is, if the hosts/testserver.sls file here also contained timezone variables, they would take precedence.
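For instance, a hypothetical addition to hosts/testserver.sls would win over the value from timezone.sls for that host:

# hypothetical override in /srv/saltstack/pillar/hosts/testserver.sls
timezone:
  name: Europe/Kiev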
Note that in top.sls files everything is referenced without the .sls extension.
Let's get down to a simple state:
/srv/saltstack/salt/packages.sls:
# Install some basic packages for Debian systems
{% if grains['os_family'] == 'Debian' %}
basepackages:
  pkg.installed:
    - pkgs:
      - lsof
      - sysstat
      - telnet
{% endif %}
As you can see, here we used Jinja, grains and the pkg module all at once.
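For illustration only (this variant is not in the repository), the same state could be written to cover both Debian- and RedHat-family systems, keeping only the distro-specific package inside the Jinja condition:

# sketch: basic packages for both Debian- and RedHat-family systems
basepackages:
  pkg.installed:
    - pkgs:
      - lsof
      - sysstat
{% if grains['os_family'] == 'Debian' %}
      - telnet
{% endif %}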
Let's try to apply this state in test mode:
root@saltsshbox:/srv/saltstack# salt-ssh testserver state.sls packages test=true
[INFO    ] Fetching file from saltenv 'base', ** done ** 'packages.sls'
testserver:
  Name: basepackages - Function: pkg.installed - Result: Differs

Summary for testserver
------------
Succeeded: 1 (unchanged=1)
Failed:    0
------------
Total states run:     1
And then for real:
root@saltsshbox:/srv/saltstack# salt-ssh testserver state.sls packages
[INFO    ] Fetching file from saltenv 'base', ** skipped ** latest already in cache 'salt://packages.sls'
testserver:
  Name: basepackages - Function: pkg.installed - Result: Changed

Summary for testserver
------------
Succeeded: 1 (changed=1)
Failed:    0
------------
Total states run:     1
The next important link is Pillar. This is what SaltStack calls variables, or more generally everything that the master assigns to remote systems. They are already partly familiar to you from the above, so straight to the point.
Get all the pillar variables defined for the host:
root@saltsshbox:~# salt-ssh testserver pillar.items
testserver:
    ----------
    chrony:
        ----------
        lookup:
            ----------
            custom:
                # some custom addons
                # if you need it
    timezone:
        ----------
        name:
            Europe/Moscow
As with Grains, you can request a single variable:
salt-ssh testserver pillar.get 'timezone:name'
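Judging by the pillar.items output above, the pillar file behind this value is presumably something like:

# /srv/saltstack/pillar/timezone.sls (reconstructed from the output above)
timezone:
  name: Europe/Moscow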
Consider the following state:
/srv/saltstack/salt/timezone.sls:
{%- set timezone = salt['pillar.get']('timezone:name', 'Europe/Dublin') %}
{%- set utc = salt['pillar.get']('timezone:utc', True) %}

timezone_settings:
  timezone.system:
    - name: {{ timezone }}
    - utc: {{ utc }}
Here we set a variable based on data from pillar. And in this construction:
{%- set timezone = salt['pillar.get']('timezone:name', 'Europe/Dublin') %}
Europe/Dublin is the default value used if for some reason salt cannot get the value from Pillar.
root@saltsshbox:/srv/saltstack# salt-ssh testserver state.sls timezone
[INFO    ] Fetching file from saltenv 'base', ** skipped ** latest already in cache 'salt://timezone.sls'
testserver:
  Name: Europe/Moscow - Function: timezone.system - Result: Changed

Summary for testserver
------------
Succeeded: 1 (changed=1)
Failed:    0
------------
Total states run:     1
And finally we have reached a real-life example: a state for time synchronization with chrony. It lives here:
/srv/saltstack/salt/chrony/init.sls
Note that init.sls is the default entry point, which salt looks for automatically, but you can use any other file.
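In other words, the two calls below are not the same thing; the second refers to a hypothetical extra file inside the same directory:

salt-ssh testserver state.sls chrony          # applies chrony/init.sls
salt-ssh testserver state.sls chrony.config   # would apply chrony/config.sls (hypothetical file)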
Here we introduce another construction typical for salt: map.jinja.
/srv/saltstack/salt/chrony/map.jinja:
{% set chrony = salt['grains.filter_by']({
  'RedHat': {
    'pkg': 'chrony',
    'conf': '/etc/chrony.conf',
    'service': 'chronyd',
  },
  'Debian': {
    'pkg': 'chrony',
    'conf': '/etc/chrony/chrony.conf',
    'service': 'chrony',
  },
}, merge=salt['pillar.get']('chrony:lookup')) %}
Its purpose is to build the necessary set of static variables for our system, while allowing them to be merged with variables from pillar if you ever need to override something.
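So if, say, you wanted to change the config path or append custom directives, a sketch of the pillar data that map.jinja and the template below would pick up via chrony:lookup might look like this (hypothetical values):

chrony:
  lookup:
    conf: /etc/chrony/chrony.conf
    custom: |
      allow 192.168.33.0/24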
Next, /srv/saltstack/salt/chrony/init.sls itself:
{% from "chrony/map.jinja" import chrony with context %}

chrony:
  pkg.installed:
    - name: {{ chrony.pkg }}
  service:
    - name: {{ chrony.service }}
    - enable: True
    - running
    - require:
      - pkg: {{ chrony.pkg }}
      - file: {{ chrony.conf }}

{{ chrony.conf }}:
  file.managed:
    - name: {{ chrony.conf }}
    - source: salt://chrony/files/chrony.conf.jinja
    - template: jinja
    - user: root
    - group: root
    - mode: 644
    - watch_in:
      - service: {{ chrony.service }}
    - require:
      - pkg: {{ chrony.pkg }}
Here the template salt://chrony/files/chrony.conf.jinja, in jinja format, deserves special attention.
/srv/saltstack/salt/chrony/files/chrony.conf.jinja:
# managed by SaltStack
{%- set config = salt['pillar.get']('chrony:lookup', {}) -%}
{%- set vals = {
  'bindcmdaddress': config.get('bindcmdaddress', '127.0.0.1'),
  'custom': config.get('custom', ''),
} %}

### chrony conf
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst
server 3.centos.pool.ntp.org iburst
stratumweight 0
driftfile /var/lib/chrony/drift
rtcsync
makestep 10 3
bindcmdaddress {{ vals.bindcmdaddress }}
bindcmdaddress ::1
keyfile /etc/chrony.keys
commandkey 1
generatecommandkey
noclientlog
logchange 0.5
logdir /var/log/chrony

{% if vals.custom -%}
{{ vals.custom }}
{%- endif %}
In this template we also request variables from Pillar and process them. You can see how salt interpreted this state using state.show_sls:
root@saltsshbox:/srv/saltstack# salt-ssh testserver state.show_sls chrony
[INFO    ] Fetching file from saltenv 'base', ** done ** 'chrony/init.sls'
[INFO    ] Fetching file from saltenv 'base', ** done ** 'chrony/map.jinja'
testserver:
    ----------
    /etc/chrony/chrony.conf:
        ----------
        __env__:
            base
        __sls__:
            chrony
        file:
            |_
              ----------
              name:
                  /etc/chrony/chrony.conf
            |_
              ----------
              source:
                  salt://chrony/files/chrony.conf.jinja
            |_
              ----------
              template:
                  jinja
            |_
              ----------
              user:
                  root
            |_
              ----------
              group:
                  root
            |_
              ----------
              mode:
                  644
            |_
              ----------
              watch_in:
                  |_
                    ----------
                    service:
                        chrony
            |_
              ----------
              require:
                  |_
                    ----------
                    pkg:
                        chrony
            - managed
            |_
              ----------
              order:
                  10002
    chrony:
        ----------
        __env__:
            base
        __sls__:
            chrony
        pkg:
            |_
              ----------
              name:
                  chrony
            - installed
            |_
              ----------
              order:
                  10001
        service:
            |_
              ----------
              name:
                  chrony
            |_
              ----------
              enable:
                  True
            - running
            |_
              ----------
              require:
                  |_
                    ----------
                    pkg:
                        chrony
                  |_
                    ----------
                    file:
                        /etc/chrony/chrony.conf
            |_
              ----------
              order:
                  10000
            |_
              ----------
              watch:
                  |_
                    ----------
                    file:
                        /etc/chrony/chrony.conf
Next, just execute it:
root@saltsshbox:/srv/saltstack# salt-ssh testserver state.sls chrony
testserver:
  Name: chrony - Function: pkg.installed - Result: Changed
  Name: /etc/chrony/chrony.conf - Function: file.managed - Result: Changed
  Name: chrony - Function: service.running - Result: Changed

Summary for testserver
------------
Succeeded: 3 (changed=3)
Failed:    0
------------
Total states run:     3
Here salt reports 3 completed states, matching the total number of state modules involved. If you rerun the command, you can see that no changes were made:
root@saltsshbox:/srv/saltstack# salt-ssh testserver state.sls chrony
testserver:

Summary for testserver
------------
Succeeded: 3
Failed:    0
------------
Total states run:     3
You can immediately see how the configuration file for chrony was formed:
salt-ssh testserver cmd.run 'cat /etc/chrony/chrony.conf'
Finally, it is worth mentioning one more command: state.highstate.
salt-ssh testserver state.highstate
It applies all the states assigned to our test server.
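As with individual states, you can first do a dry run, and also target all roster hosts at once:

salt-ssh '*' state.highstate test=true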
So, we have learned what salt-ssh from SaltStack is and how to use it. We covered the key points of building the environment salt-ssh needs, set up a test environment with Vagrant, and step by step experimented with the fundamental SaltStack concepts: Grains, States and Pillar. We also learned how to write states from simple to complex, ending with real examples on which further automation can be built.
That's all for now. Many interesting topics were left out of scope, but I hope this information will help you start working with this wonderful configuration management system.
Useful information:
best_practices
walkthrough
starting_states
pillar
formulas
tutorials
Source: https://habr.com/ru/post/303418/