πŸ“œ ⬆️ ⬇️

Ansible: testing playbooks (part 1)



I think any system administrator who uses Ansible to manage their zoo of servers has wondered how to verify that the described configuration of those servers is correct. How do you stop being afraid to make changes to server configurations?
In this series of articles on DevOps, we will talk about exactly that.


Here are the conditions under which we will test the configuration:

1. All configuration is stored in a git repository.
2. Jenkins (the CI service) periodically polls the repository with our roles/playbooks for changes.
3. When changes appear, Jenkins launches a configuration build and covers it with tests. The tests consist of two stages:
3.1 Test-kitchen takes the updated code from the repository, launches a completely fresh Docker container, deploys the updated playbooks from the repository into it, and runs Ansible locally inside the container.
3.2 If the first stage succeeds, serverspec is started in the Docker container and checks whether the new configuration was applied correctly.
4. If all the tests in test-kitchen pass, Jenkins initiates the rollout of the new configuration.

Of course, you could run each playbook/role in Vagrant (conveniently, it has such a handy thing as provisioning) to check that the configuration turns out as expected, but performing that many manual steps every time you test a new or modified configuration is a dubious pleasure. Why bother, when everything can be automated? For that, we turn to such wonderful tools as Test-kitchen, Serverspec and, of course, Docker.

Let's first look at how we test code in Test-kitchen, using a couple of spherical-in-a-vacuum roles as an example.

Ansible.



I built the latest Ansible from source; I prefer to build it by hand. (If that is too much trouble, you can use omnibus-ansible.)
git clone git://github.com/ansible/ansible.git --recursive
cd ./ansible


We build and install a deb package (we will be testing on Debian).
make deb
dpkg -i deb-build/unstable/ansible_2.1.0-0.git201604031531.d358a22.devel~unstable_all.deb


Ansible is up; let's check:
ansible --version

ansible 2.1.0
  config file = /etc/ansible/ansible.cfg
  configured module search path = Default w/o overrides


Fine! So it's time to get down to business.

Now we need to create a git repository.
mkdir /srv/ansible && cd /srv/ansible
git init
mkdir base && cd base


The repository architecture is approximately as follows:
 β”œβ”€β”€ ansible.cfg β”œβ”€β”€ inventory β”‚ β”œβ”€β”€ group_vars β”‚ β”œβ”€β”€ hosts.ini β”‚ └── host_vars β”œβ”€β”€ logs β”œβ”€β”€ roles β”‚ β”œβ”€β”€ common β”‚ β”‚ β”œβ”€β”€ defaults β”‚ β”‚ β”‚ └── main.yml β”‚ β”‚ β”œβ”€β”€ files β”‚ β”‚ β”œβ”€β”€ handlers β”‚ β”‚ β”‚ └── main.yml β”‚ β”‚ β”œβ”€β”€ tasks β”‚ β”‚ β”‚ β”œβ”€β”€ install_packages.yml β”‚ β”‚ β”‚ └── main.yml β”‚ β”‚ β”œβ”€β”€ templates β”‚ β”‚ └── vars β”‚ └── nginx β”‚ β”œβ”€β”€ defaults β”‚ β”œβ”€β”€ files β”‚ β”œβ”€β”€ handlers β”‚ β”‚ └── main.yml β”‚ β”œβ”€β”€ tasks β”‚ β”‚ β”œβ”€β”€ configure.yml β”‚ β”‚ β”œβ”€β”€ install.yml β”‚ β”‚ └── main.yml β”‚ β”œβ”€β”€ templates β”‚ β”‚ └── nginx.conf.j2 β”‚ └── vars β”œβ”€β”€ site.yml β”œβ”€β”€ Vagrantfile └── vars └── nginx.yml 


We will not touch the default configuration file; all project-specific settings go into our own ansible.cfg inside the project.

ansible.cfg:
[defaults]
roles_path = ./roles/           # where to look for roles
retry_files_enabled = False     # do not create *.retry files
become = yes                    # escalate privileges via sudo
log_path = ./logs/ansible.log   # log file location
inventory = ./inventory/        # path to the inventory files


Next, we need an inventory file, where we need to specify a list of hosts with which we will work.
mkdir inventory
cd inventory
mkdir host_vars
mkdir group_vars


The inventory file:
 127.0.0.1 ansible_connection=local 


These are all the hosts that will be managed by Ansible.
host_vars is the folder for variables that differ from the role's base values on particular hosts.
For example: Ansible's Jinja2 template engine comes in handy when working with files and configs.
Say we have a resolv.conf template, templates/resolv.conf.j2:
 nameserver {{ nameserver }} 


The default variables file (roles/common/defaults/main.yml) states:
 nameserver: 8.8.8.8 


But on host 1.1.2.2 we need to deploy resolv.conf with a different nameserver value.
We override it via host_vars/1.1.2.2.yml:
 nameserver: 8.8.4.4 


In this case, when the playbook runs, the standard resolv.conf (with the value 8.8.8.8) is deployed to all hosts, while host 1.1.2.2 gets the value 8.8.4.4.
Read more about this in the Ansible documentation.
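To make the example complete, here is a minimal sketch of a task that could deploy this template; the task name and file mode are my own additions, not from the original setup:

```yaml
# Hypothetical task, e.g. in roles/common/tasks/main.yml
- name: deploy resolv.conf from the Jinja2 template
  # {{ nameserver }} resolves to 8.8.8.8 from defaults/main.yml,
  # unless host_vars/1.1.2.2.yml overrides it with 8.8.4.4
  template: src=resolv.conf.j2 dest=/etc/resolv.conf owner=root group=root mode=0644
```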

common-role



This role performs the standard tasks that must run on every host: installing base packages, creating users, and so on.
I described the structure a bit higher up; let's go over the details.

Role structure:
./roles/common/
├── defaults
│   └── main.yml
├── files
├── handlers
│   └── main.yml
├── tasks
│   ├── install_packages.yml
│   └── main.yml
├── templates
└── vars


In the roles/common/defaults/main.yml file, the default variables are specified.
---
deb_packages:
  - curl
  - fail2ban
  - git
  - vim

rh_packages:
  - curl
  - epel-release
  - git
  - vim


The files folder contains files that should be copied to a remote host.
The tasks folder lists all the tasks that must be performed when assigning a role to a host.
roles/common/tasks/
├── install_packages.yml
└── main.yml


roles/common/tasks/install_packages.yml
---
- name: installing Debian/Ubuntu pkgs
  apt: pkg={{ item }} update_cache=yes
  with_items: "{{ deb_packages }}"
  when: (ansible_os_family == "Debian")

- name: install RHEL/CentOS packages
  yum: pkg={{ item }}
  with_items: "{{ rh_packages }}"
  when: (ansible_os_family == "RedHat")


Here, a with_items loop and a when condition are used. If the distribution belongs to the Debian family, the packages from the deb_packages list are installed with the apt module; if it belongs to the RedHat family, the packages from the rh_packages list are installed with the yum module.
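As a side note: on newer Ansible releases the same effect can be achieved without with_items, because the apt and yum modules accept a list directly; a sketch, assuming such a version is in use:

```yaml
- name: installing Debian/Ubuntu pkgs
  apt:
    name: "{{ deb_packages }}"   # the whole list is passed in one module call
    update_cache: yes
  when: ansible_os_family == "Debian"
```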

roles/common/tasks/main.yml
---
- include: install_packages.yml


(Yes, I really do love decomposing roles into separate task files.)

The main.yml file simply includes the YAML files from the tasks folder in which the actual tasks are described.
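Judging by the converge output shown later in the article (it includes create_users.yml and delete_users.yml), the role eventually grows more task files; with this decomposition style, main.yml simply gains one include line per file:

```yaml
---
- include: install_packages.yml
- include: create_users.yml
- include: delete_users.yml
```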

In the templates folder there are templates in Jinja2 format (the example with resolv.conf was considered above).

The handlers folder lists actions that can be run after certain tasks complete. For example, we have this piece of a task:
- name: installing Debian packages
  apt: pkg=fail2ban update_cache=yes
  when: (ansible_os_family == "Debian")
  notify:
    - restart fail2ban


and the handler, roles/common/handlers/main.yml:
---
- name: restart fail2ban
  service: name=fail2ban state=restarted


In this case, after the task apt: pkg=fail2ban update_cache=yes runs, the restart fail2ban handler fires. In other words, fail2ban is restarted as soon as it is installed. If fail2ban is already present on the system, the task reports no change, so the notification is skipped and the handler does not run.

In the vars folder you can specify variables that, unlike defaults, take precedence over inventory values.
vars/common.yml:
---
deb_packages:
  - curl
  - fail2ban
  - vim
  - git
  - htop
  - atop
  - python-pycurl
  - sudo

rh_packages:
  - curl
  - epel-release
  - vim
  - git
  - fail2ban
  - htop
  - atop
  - python-pycurl
  - sudo


Test-kitchen + serverspec.



Resources used:

serverspec.org/resource_types.html

github.com/test-kitchen/test-kitchen
github.com/portertech/kitchen-docker
github.com/neillturner/kitchen-verifier-serverspec
github.com/neillturner/kitchen-ansible
github.com/neillturner/omnibus-ansible

Test-kitchen is a tool for integration testing. It prepares the test environment, letting you quickly launch a container or virtual machine and test a playbook/role there.
It can work with Vagrant, but we will use Docker as the driver.
It is installed as a gem — you can simply run gem install test-kitchen — but I prefer to use bundler. To do that, create a Gemfile in the project folder and list all the gems with their versions in it.
source 'https://rubygems.org'

gem 'net-ssh', '~> 2.9'
gem 'serverspec'
gem 'test-kitchen'
gem 'kitchen-docker'
gem 'kitchen-ansible'
gem 'kitchen-verifier-serverspec'


It is very important to pin the version of the net-ssh gem, since test-kitchen will most likely not work with a newer one.
Now run bundle install and wait until all the gems and their dependencies are installed.
In the project folder, run kitchen init. A .kitchen.yml file will appear, which should look approximately like this:
---
driver:
  name: docker

provisioner:
  name: ansible_playbook
  hosts: localhost
  require_chef_for_busser: false
  require_ansible_omnibus: true
  use_sudo: true

platforms:
  - name: ubuntu-14.04
    driver_config:
      image: vbatuev/ubuntu-rvm
  - name: debian-8
    driver_config:
      image: vbatuev/debian-rvm

verifier:
  name: serverspec
  additional_serverspec_command: source $HOME/.rvm/scripts/rvm

suites:
  - name: Common
    provisioner:
      name: ansible_playbook
      playbook: test/integration/default.yml
    verifier:
      patterns:
        - roles/common/spec/common_spec.rb


At this stage I had difficulty running serverspec in the container, so I had to apply a small workaround.
All the images were built by me and uploaded to Docker Hub; in each image a kitchen user is created, under which the tests run, and rvm with ruby 2.3 is installed.
The additional_serverspec_command parameter tells the verifier to use rvm. This spares us the usual dancing with a tambourine around the ruby versions in standard repositories, gem dependencies, and launching rspec; without it, getting the serverspec tests to run takes some sweat.
The fact is that kitchen-verifier-serverspec is still quite raw. While writing this article, I had to send the author several bug reports and pull requests.

In the suites section, we specify the playbook with the role that will be checked.
playbook: test/integration/default.yml
---
- hosts: localhost
  sudo: yes
  roles:
    - common


and the patterns for the serverspec tests:
verifier:
  patterns:
    - roles/common/spec/common_spec.rb
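If we later want to cover the nginx role from the repository tree in the same way, another suite can be added next to Common; a hypothetical sketch (the playbook and spec paths are my assumptions, not from the article):

```yaml
suites:
  - name: Nginx
    provisioner:
      name: ansible_playbook
      playbook: test/integration/nginx.yml    # hypothetical playbook
    verifier:
      patterns:
        - roles/nginx/spec/nginx_spec.rb      # hypothetical spec file
```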


What the test looks like:
common_spec.rb
require '/tmp/kitchen/roles/common/spec/spec_helper.rb'

describe package('curl') do
  it { should be_installed }
end


It is also very important to write the require in the header exactly this way; otherwise it will not find the helper and will not work.

spec_helper.rb
require 'serverspec'
set :backend, :exec


A complete list of what serverspec can check is available here.

Commands:

kitchen test - runs all stages of the test.
kitchen converge - runs the playbook in the container.
kitchen verify - runs serverspec.

The results should look something like this:

When performing a playbook:
Going to invoke ansible-playbook with:
  ANSIBLE_ROLES_PATH=/tmp/kitchen/roles sudo -Es ansible-playbook -i /tmp/kitchen/hosts -c local -M /tmp/kitchen/modules /tmp/kitchen/default.yml
[WARNING]: log file at ./logs/ansible.log is not writeable and we cannot create it, aborting
[DEPRECATION WARNING]: Instead of sudo/sudo_user, use become/become_user and make sure become_method is 'sudo' (default). This feature will be removed in a future release. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.

PLAY ***************************************************************************

TASK [setup] *******************************************************************
ok: [localhost]

TASK [common : include] ********************************************************
included: /tmp/kitchen/roles/common/tasks/install_packages.yml for localhost

TASK [common : install {{ item }} pkgs] ****************************************
changed: [localhost] => (item=[u'curl', u'fail2ban', u'git', u'vim'])

TASK [common : install {{ item }} packages] ************************************
skipping: [localhost] => (item=[])

TASK [common : include] ********************************************************
included: /tmp/kitchen/roles/common/tasks/create_users.yml for localhost

TASK [common : Create admin users] *********************************************

TASK [common : include] ********************************************************
included: /tmp/kitchen/roles/common/tasks/delete_users.yml for localhost

TASK [common : Delete users] ***************************************************
ok: [localhost] => (item={u'name': u'testuser'})

RUNNING HANDLER [common : start fail2ban] **************************************
changed: [localhost]

PLAY RECAP *********************************************************************
localhost : ok=7 changed=2 unreachable=0 failed=0

Finished converging <Common-ubuntu-1404> (3m58.17s).


When starting serverspec:
Running Serverspec

Package "curl"
  should be installed

Package "vim"
  should be installed

Package "fail2ban"
  should be installed

Package "git"
  should be installed

Finished in 0.12682 seconds (files took 0.40257 seconds to load)
4 examples, 0 failures

Finished verifying <Common-ubuntu-1404> (0m0.93s).


If everything went well, we have just prepared the first stage of testing playbooks and Ansible roles. In the next part, we will look at how to add even more automation to testing Ansible infrastructure code using such a great tool as Jenkins.

How do you check your playbooks?

Author: DevOps admin Southbridge - Victor Batuev.

Source: https://habr.com/ru/post/303762/

