What is interesting here?
This article is intended for those who use, or are considering using, SaltStack as a configuration management tool. I will try to share, very briefly, my experience of using this system for flexible management of service configurations, using
Tinyproxy as the example.
This is the second article in the SaltStack series; read the first one
here .
SaltStack: ideology of building configurations
Recall that for the configuration of managed machines, SaltStack introduces the concept of a
state : changes to states are made on the master and then executed on all slave (minion) machines. The model is very similar to
Puppet with its manifests, but SaltStack has, in my opinion, one advantage: the execution of states is initiated from the master, rather than by the clients themselves as is done in Puppet.
But, closer to the point. Having played around with Salt for a while and tried various ways of organizing the state data (sls files), I arrived at a generalized model suitable for the majority of the projects I maintain. The essence of the model is inheritance and overriding within the Service / Resource / Project relationship and their description in SaltStack terms. That will be the subject of the next article. For now I will use this model's methodology to describe managing the TinyProxy service, without really going into the details of the model itself.
Initial description of the state
So, I will not describe in detail what TinyProxy is or why it is needed (those who know it need no introduction; the inquisitive can ask Google). I will only say that I use it in one of my projects to provide a proxy service to clients. The setup: 20-30 servers with TinyProxy scattered around the world. Installation and configuration are extremely simple, so we will omit the details and focus only on what matters for the task at hand: restricting access to the proxy service based on the client's IP address. In TinyProxy's configuration this is handled by the Allow directive.
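For context, access control in tinyproxy.conf is a plain list of Allow lines; a minimal illustrative fragment might look like this (the addresses are placeholders, not from the article):

```
# Illustrative fragment of /etc/tinyproxy.conf (placeholder values).
# Only clients matching an Allow rule may use the proxy;
# if no Allow/Deny rules are present, tinyproxy accepts all clients.
Port 8888
Allow 127.0.0.1
Allow 1.2.3.4
```

Maintaining such a list by hand across 20-30 servers is exactly the tedium the state below removes.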
Here is the state that creates the TinyProxy Service (in terms of my model):
tinyproxy.sls:

```yaml
tinyproxy-config:
  file.managed:
    - name: /etc/tinyproxy.conf
    - source: salt://__DEFAULT-CONFIGS/tinyproxy.conf
    - template: jinja
    - require:
      - pkg: tinyproxy-pkg

tinyproxy-pkg:
  pkg.installed:
    - name: tinyproxy

tinyproxy-service:
  service.running:
    - name: tinyproxy
    - full_restart: True
    - require:
      - pkg: tinyproxy-pkg
    - watch:
      - file: tinyproxy-config
```
Important points:
- We take the /etc/tinyproxy.conf file under management
- Its prototype (template) lives on the master at salt://__DEFAULT-CONFIGS/tinyproxy.conf
- We tell the state that this file must be processed with Jinja ( - template: jinja ) and that it contains template directives (described below)
Everything else in the state is quite standard: installing the package (conveniently, TinyProxy is available out of the box in most Linux distributions), taking the system service under management, and tying its restart to changes in the configuration file. We abstract away the fact that on different systems the file may live in different directories under /etc.
A fragment of tinyproxy.conf with the Jinja template: . . .
An important point: how to write templates correctly, and why there is a dash next to the % , can be read
here ; the data for the template is taken from the
Pillar system.
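Since the article's actual template fragment is omitted, here is my own sketch of what the Allow section of such a template could look like, assuming pillar data of the form `tinyproxy:allowed_ips` used in this article (this is an illustration, not the author's file):

```jinja
{# Sketch of the Allow block in salt://__DEFAULT-CONFIGS/tinyproxy.conf.  #}
{# The dash in "{%-" strips the whitespace/newline before the tag, so    #}
{# the rendered file gets one clean "Allow <ip>" line per address        #}
{# instead of blank lines around the loop tags.                          #}
{%- for ip in pillar['tinyproxy']['allowed_ips'] %}
Allow {{ ip }}
{%- endfor %}
```

Without the dashes, each `{% for %}` / `{% endfor %}` line would leave an empty line in the generated config; harmless for TinyProxy, but untidy.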
The pillar itself (in terms of my model, a Resource) for this case looks like this:
tinyproxy-pillar.sls:

```yaml
tinyproxy:
  allowed_ips:
    - 1.2.3.4
    - 2.3.4.5
    - 3.4.5.6
```
So the whole sequence looks like this: each time the state is applied to the slave machines, tinyproxy.conf is run through the Jinja template engine, which implants the necessary data taken from the pillar; the resulting file is sent to the client, and the service is then restarted.
The final tinyproxy.conf: . . .
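Given the pillar values shown above, the rendered Allow block would come out roughly like this (reconstructed from the pillar data; the rest of the file is omitted):

```
Allow 1.2.3.4
Allow 2.3.4.5
Allow 3.4.5.6
```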
What is the result?
All these manipulations serve one goal: whenever you have to add or remove a client's IP address (in accordance with the access policy), you simply edit the data in the pillar file (add or remove a line) and run state.highstate for all proxies ( '*proxy*' ).
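On the master, that amounts to something like the following (a sketch; it assumes your proxy minions' IDs match the '*proxy*' glob):

```
# Refresh pillar data on the matching minions, then apply the full state.
salt '*proxy*' saltutil.refresh_pillar
salt '*proxy*' state.highstate
```

The explicit refresh_pillar is optional but makes sure every minion sees the freshly edited allowed_ips list before the highstate runs.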