I have a pet project that I work on in my free time, devoted entirely to infrastructure experiments. For configuration management I use SaltStack, a centralized infrastructure management system: a master server configures the minion (slave) servers.
Over the life of the project I hit my share of pitfalls, but in the end I arrived at a very convenient way of working with it. That is what this article is about: how it all began and where it ended up.
At first the whole project was monolithic; it had everything:
The whole project lived in a single git repository, connected to the master server via gitfs. This is terribly convenient: there is no need to worry about updating files on the master server, since SaltStack pulls everything from the repository itself.
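For reference, wiring a repository into the master via gitfs takes just a few lines in the master configuration (a minimal sketch; the repository URL is a placeholder):

```yaml
# /etc/salt/master (fragment)
fileserver_backend:
  - gitfs

gitfs_remotes:
  - https://example.com/me/infrastructure.git
```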
I could spin up a test copy of my infrastructure and try everything on it using a separate branch of the git repository. But spinning up a test copy of the infrastructure is expensive:
On the other hand, my "production" is one continuous "test", and it is okay if I break it (well, not exactly okay - it still hurts). And since breaking it is not scary, I deployed every change, including intermediate ones, by pushing to the repository. The commit log began to look creepy, to say the least:
In reality it was not quite that bad, but on the whole the picture above is a fair example %)
Then it got even worse: over time I began to forget what a given state does and how it does it. After all, this is a personal project, and I work on it not constantly but from time to time. A README file solved this problem only poorly.
There was also coupling between states through data. Different states used the same data, and if the data structure was changed for one state, the others were guaranteed to break. Because of this, the production configuration would at times sit "with its guts hanging out": I might spend several days fixing a bug, during which some of the states remained broken. In short, all this pointed to poor architectural design of the project.
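A hedged illustration of the kind of coupling I mean (the names here are made up): two unrelated states reading the same pillar key, so that restructuring it for one silently breaks the other:

```yaml
# pillar fragment: one key, two consumers
web:
  user: www-data   # read by states/nginx/init.sls as pillar['web']['user']...
                   # ...and by states/logrotate/init.sls via the same lookup;
                   # restructure it for nginx, and logrotate breaks on the next run
```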
Still, all these drawbacks bothered me only a little; at my pace of work I was prepared to live with them. There was just one thing I could not put up with: after changing something in the data structure, I would forget to check all the states. Catching such "delayed" errors later is slow and dreary.
I realized that if I wrote tests, I would have a guarantee that whenever I changed something, the autotests would check the result of every state. Hooray! It is all quite simple. The task is clear: I want to verify the results that the states in the project produce.
So, what do we have to test? The output of SaltStack is configuration files, services, Docker containers, firewall settings, SELinux, and so on. All of this is tested very well with Serverspec.
I started recalling the conferences I had attended and the articles I had read on the topic. Of the relevant, good Russian-language material, only one talk had stuck in my head: Igor Kurochkin ( IgorITL ), whom I heard live at DevConf 2015. You can watch his talk, "Testing infrastructure as code":
I also found an article that is good for understanding the problem: "Agile DevOps: Test-driven infrastructure".
After going through all this material, I concluded that KitchenCI suits my task, since it:
I figured that now, it seemed, I knew everything. The theory was there, it all fit together in my head - now I could definitely start writing tests, couldn't I?
Then I looked at the project and saw that the theory in my head did not fit my reality at all. How do I approach my project? Where do I put the tests? How do I run them?
In search of an answer, I went back and combed through the KitchenCI documentation. Unfortunately, it is heavily skewed toward the tool's specialization in Chef and its idioms, and the examples, once again, are all written for Chef.
Let's take a closer look at KitchenCI. In this tool we operate with the following objects (the same ones that appear in .kitchen.yml below): drivers, which create the test machines; provisioners, which run the configuration management tool inside them; platforms, the OS images to test against; and suites, the test scenarios.
I use SaltStack. Google tells us there is a third-party project, 'kitchen-salt', which implements the salt_solo provisioner for SaltStack. There is also a detailed tutorial with an example of how to use it.
From the KitchenCI and kitchen-salt documentation I took away the main point: the unit under test is an individual recipe (in Chef terminology), not the entire configuration. In SaltStack the analogue of Chef recipes is formulas: self-contained states packaged as independent projects. Formulas exist so that code can be reused across projects; a whole pile of them is available on GitHub, for example.
And this is the main reason my project was "not suitable" for KitchenCI: it is monolithic. The words "refactoring", "code coupling", "modular approach" and the like spun around in my head. I felt sad. As a non-programmer, I am not supposed to know such words.
Challenge accepted! As I recall, the first rule of refactoring is that it must have a clear, achievable, and measurable goal - usually a detailed answer to the question "Why are we making these changes?". In my case it was worded as follows:
- a pillar.example file with an example of the data structure that the state expects;

Having made the list of tasks, I got sad again, drank some coffee, and got to work. One by one, the states turned into separate formulas. As I moved each state out of the main project, I added a reference to the new formula in the master server configuration, so the project stayed essentially operational throughout the rework.
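Since the master already uses gitfs, the natural way to reference a new formula is one more gitfs remote (a sketch of the idea; repository URLs are placeholders):

```yaml
# /etc/salt/master (fragment)
gitfs_remotes:
  - https://example.com/me/infrastructure.git
  - https://example.com/me/common-packages-formula.git  # one entry per extracted formula
```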
Because some states were coupled, I had to revise the data storage structure: I made it more independent, split it into non-overlapping parts for the different formulas, and tried to avoid duplicating data along the way. This was perhaps the hardest part, and because of it some logic migrated from one state to another.
The result was almost two dozen separate formulas, simple and clear, each with examples of the data it uses and minimal documentation. The resulting data structure became noticeably simpler. Even at this stage I felt a positive effect: I knew my formulas were now independent, and I could make changes to them much more boldly.
As soon as I started treating an individual formula as the unit under test, a picture of how to apply KitchenCI immediately formed in my head. Let's walk through the testing process using the simplest formula, "common-packages", as an example. This formula installs the system packages I expect to find on any of my servers - just the utilities I am used to.
NB! From here on, all commands are executed in the root of the formula's working directory.
This is what the original file structure of the formula looks like:
```
.git
common-packages/init.sls
pillar.example
README.md
```
The state itself, init.sls:

```sls
packages:
  pkg.latest:
    - pkgs:
      {%- if pillar['packages'] is defined %}
      {%- for package in pillar['packages'] %}
      - {{ package }}
      {% endfor %}
      {% endif %}
```
Sample data, pillar.example:

```yaml
packages:
  - bind-utils
  - whois
  - git
  - psmisc
  - mlocate
  - openssl
  - bash-completion
  - net-tools
```
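For clarity, with this pillar the Jinja template above renders into plain YAML along these lines (hand-rendered here for illustration):

```yaml
packages:
  pkg.latest:
    - pkgs:
      - bind-utils
      - whois
      - git
      - psmisc
      - mlocate
      - openssl
      - bash-completion
      - net-tools
```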
For KitchenCI we will need Vagrant and Ruby installed (and the bundler gem, of course). Create a Gemfile with the list of required Ruby gems in the root of the formula's project:
```ruby
source "https://rubygems.org"

gem "test-kitchen"
gem "kitchen-salt"
gem "kitchen-vagrant"
```
Install the dependencies:
```
$ bundle install
```
Let KitchenCI generate a skeleton of directories and stub files for the tests:
```
$ sudo kitchen init -P salt_solo
```
This gives us:

- a test/integration/default directory;
- a chefignore file, which we can safely delete - a legacy of KitchenCI's tight integration with Chef;
- a .gitignore file (if you had not created one before), to which these lines were added:

```
.kitchen/
.kitchen.local.yml
```

- a .kitchen.yml file with the following contents:

```yaml
---
driver:
  name: vagrant

provisioner:
  name: salt_solo

platforms:
  - name: ubuntu-14.04
  - name: centos-7.2

suites:
  - name: default
    run_list:
    attributes:
```
Now let's describe our formula in .kitchen.yml:

```yaml
---
driver:
  name: vagrant

provisioner:
  name: salt_solo
  formula: common-packages        # <- the formula under test
  pillars-from-files:
    packages.sls: pillar.example  # <- take the sample pillar data from pillar.example
  pillars:                        # <- the pillar top file; don't forget it!
    top.sls:
      base:
        '*':
          - packages
  state_top:                      # <- the state top file that state.highstate will use
    base:
      '*':
        - common-packages

platforms:
  - name: centos-7.2              # <- I only target CentOS 7, so Ubuntu is gone

suites:
  - name: default
    run_list:
    attributes:
```
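Before creating anything, you can ask KitchenCI to show the test instances it derived from this config (one per suite-platform pair):

```
$ kitchen list
```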
In general, everything is ready. Let's create a virtual machine, configure it and run the formula in it:
```
$ kitchen converge centos-7.2
```
Yes, KitchenCI did the work for us: it downloaded the base box and booted a virtual machine from it, installed Salt inside, copied the formula and the pillar data over, and ran the highstate.
Ho ho! Now I can develop formulas and fix bugs in them without committing intermediate changes to master and rolling them out to production. The production infrastructure will be noticeably more stable, and it seems my commit log will no longer be embarrassing to show if I suddenly have to.
You can inspect the result of the formula by hand by logging into the machine:
```
$ kitchen login centos-7.2
```
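Inside the box, the checks are whatever you would run on a real server; for this formula on CentOS, for example:

```
$ rpm -q git mlocate whois
```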
So, I learned how to run formulas with KitchenCI and check that they work. Checking by hand is great, but where are the autotests? Let's verify the result of the formula with autotests after all.
To do this:

- create a ./test/integration/default/serverspec directory;
- put a spec file in it - any name ending in _spec.rb (say, packages_spec.rb) - with the following contents.

Attention! The _spec suffix is required. You can read about this and other nuances, and get acquainted with Serverspec in general, on the official site: http://serverspec.org/ .
```ruby
require 'serverspec'

# Required by serverspec
set :backend, :exec

describe package('bind-utils') do
  it { should be_installed }
end

describe package('whois') do
  it { should be_installed }
end

describe package('git') do
  it { should be_installed }
end

describe package('psmisc') do
  it { should be_installed }
end

describe package('mlocate') do
  it { should be_installed }
end

describe package('openssl') do
  it { should be_installed }
end

describe package('bash-completion') do
  it { should be_installed }
end

describe package('net-tools') do
  it { should be_installed }
end
```
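Since the checks are identical, the same spec can also be generated from a list - purely a matter of taste:

```ruby
require 'serverspec'

# Required by serverspec
set :backend, :exec

# One identical expectation per package; extend the list as the formula grows
%w(bind-utils whois git psmisc mlocate openssl bash-completion net-tools).each do |pkg|
  describe package(pkg) do
    it { should be_installed }
  end
end
```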
To save time and not wait for the machine to be created and configured from scratch again, let's simply ask KitchenCI to run the tests:
```
$ kitchen verify centos-7.2
```
That's all the magic.
KitchenCI can also do all of the above steps with a single command, kitchen test: a virtual machine is created, the formula and the tests are run, and then the machine is destroyed.
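In our case that would be (same instance-matching argument as before):

```
$ kitchen test centos-7.2
```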
kitchen-salt can test not only individual formulas but also sets of them. That is, you can easily test the combined result of several formulas; such a check shows whether your formulas can work together and produce the expected result. All of this is possible thanks to various combinations of the provisioner's options: https://github.com/simonmcc/kitchen-salt/blob/master/provisioner_options.md . Which means I could easily have bolted KitchenCI and tests onto the original shape of my project - but it seems to me the result turned out much better this way.
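A minimal sketch of the idea, assuming a hypothetical second formula, common-users, that the provisioner can reach (the exact way to pull in additional formulas is covered by the provisioner options linked above):

```yaml
provisioner:
  name: salt_solo
  formula: common-packages
  state_top:
    base:
      '*':
        - common-packages
        - common-users   # hypothetical second formula applied in the same run
```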
Now I am gradually covering my old formulas with tests and writing new ones - noticeably faster than before. At any moment I am confident that my formulas work, both new and old. Despite the time spent on refactoring and writing tests, I got a clear gain in working on my pet project. There is no longer the worry that, having set the project aside for a long stretch, I will be unable to pick it up again because of its complexity or because of formulas that fail for no obvious reason. Yes, the refactoring ate several days of my personal time. Yes, writing tests is boring. But they give a feeling of confidence in the project. A great feeling.
I will be glad to answer questions and to hear comments and suggestions for the future :)
salt_solo: https://github.com/simonmcc/kitchen-salt

Source: https://habr.com/ru/post/301812/