
Ansible Handbook

Orchestration and configuration management


This practical guide will introduce you to Ansible. You will need a virtual or physical machine to act as an Ansible host. A Vagrant environment ships with this manual.


Ansible is a tool for remote configuration management: it lets you configure remote machines. Its main difference from similar systems is that Ansible uses your existing SSH infrastructure, while others (Chef, Puppet, etc.) require a dedicated PKI environment to be set up.


The manual covers the following topics:


  1. Ansible and Vagrant installation
  2. Inventory file
  3. The shell and copy modules, fact gathering, variables
  4. Running commands on host groups
  5. Playbooks
  6. Example: bringing up a cluster, installing and configuring Apache and the HAProxy load balancer
  7. Error handling, rollback
  8. Configuration templates
  9. Roles

Ansible works in so-called push mode: the configuration is pushed to the nodes from the control machine. Most other CM systems do the opposite: the nodes pull their configuration from a central server.


This mode is interesting because the control machine does not need to be publicly accessible; only the nodes do (and we will see later that even hidden nodes can receive a configuration).


What you need for Ansible


The following Python modules are required: python-yaml, python-jinja2, python-paramiko and python-crypto.



On Debian / Ubuntu, run:


sudo apt-get install python-yaml python-jinja2 python-paramiko python-crypto 

You should also have an SSH key pair in ~/.ssh.
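If you do not have a key pair yet, one way to generate it (a sketch using standard OpenSSH options; the comment string is just a label) is:

 ssh-keygen -t rsa -b 4096 -C "ansible tutorial key"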


Ansible installation


From source


The devel branch is kept stable, so use it. You may need to install git first (sudo apt-get install git on Debian/Ubuntu).


 git clone git://github.com/ansible/ansible.git
 cd ./ansible

Now you can load the Ansible environment.


 source ./hacking/env-setup 

From deb package


 sudo apt-get install make fakeroot cdbs python-support
 git clone git://github.com/ansible/ansible.git
 cd ./ansible
 make deb
 sudo dpkg -i ../ansible_1.1_all.deb   (version may vary)

This tutorial assumes that you used this particular method.


Install Vagrant


Vagrant makes it easy to create virtual machines and run them on VirtualBox. A Vagrantfile comes with this guide.


To run Vagrant you need to install VirtualBox and Vagrant itself.



Now initialize the virtual machines with the following command. You do not need to download any "box" manually: the manual ships with a ready-made Vagrantfile that contains everything you need.


 vagrant up 

and pour yourself some coffee (if you use vagrant-hostmaster, you will be asked for the root password). If something goes wrong, check the Vagrant documentation.


Adding SSH keys in a virtual machine


To continue, you need to add your key to root's authorized_keys on the virtual machines. Strictly speaking this is not required (Ansible can use sudo and password authentication), but it makes everything much easier.


Ansible is perfect for this task, so we use it. However, I will not explain anything yet. Just trust me.


 ansible-playbook -c paramiko -i step-00/hosts step-00/setup.yml --ask-pass --sudo 

Enter vagrant as the password. If you get "Connection refused" errors, check your firewall settings.


Now add your key to the ssh-agent (ssh-add).


Inventory


Now we need to prepare the inventory file. The default location is /etc/ansible/hosts, but you can point Ansible at a different path with the ANSIBLE_HOSTS environment variable or the -i flag.
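For example, assuming an inventory file named hosts in the current directory (the file name is only an illustration), both of these are equivalent:

 ANSIBLE_HOSTS=./hosts ansible all -m ping
 ansible all -i ./hosts -m ping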


We have created such an inventory file:


 host0.example.org ansible_ssh_host=192.168.33.10 ansible_ssh_user=root
 host1.example.org ansible_ssh_host=192.168.33.11 ansible_ssh_user=root
 host2.example.org ansible_ssh_host=192.168.33.12 ansible_ssh_user=root

ansible_ssh_host is a special variable that sets the IP address Ansible will connect to. It is not needed here if you use the vagrant-hostmaster gem. You will also need to change the IP addresses if you configured your virtual machines with different ones.


ansible_ssh_user is another special variable that tells Ansible to connect as the specified user. By default Ansible uses your current user name, or another default specified in ~/.ansible.cfg (remote_user).
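For reference, a minimal ~/.ansible.cfg that sets such a default could look like this (a sketch; adjust the user to your environment):

 [defaults]
 remote_user = root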


Check


Now that Ansible is installed, let's check that everything works:


 ansible -m ping all -i step-01/hosts 

Here Ansible runs the ping module (more on modules later) on each host. The output should look something like this:


 host0.example.org | success >> {
     "changed": false,
     "ping": "pong"
 }

 host1.example.org | success >> {
     "changed": false,
     "ping": "pong"
 }

 host2.example.org | success >> {
     "changed": false,
     "ping": "pong"
 }

Fine! All three hosts are alive and well, and Ansible can communicate with them.


Communication with nodes


Now we are ready. Let's play with the command we already know from the previous section: ansible. It is one of the three commands Ansible provides for interacting with nodes.


Do something useful


In the last command, -m ping meant "use the ping module". It is one of the many modules available in Ansible. The ping module is very simple and does not require any arguments. Modules that take arguments receive them via -a. Let's look at a few modules.


Shell module


This module allows you to run shell commands on a remote host:


 ansible -i step-02/hosts -m shell -a 'uname -a' host0.example.org 

The output should be like:


 host0.example.org | success | rc=0 >>
 Linux host0.example.org 3.2.0-23-generic-pae #36-Ubuntu SMP Tue Apr 10 22:19:09 UTC 2012 i686 i686 i386 GNU/Linux

Easy!


Copy module


The copy module copies a file from the control machine to the remote node. Imagine we want to copy our /etc/motd to /tmp on the node:


 ansible -i step-02/hosts -m copy -a 'src=/etc/motd dest=/tmp/' host0.example.org 

Output:


 host0.example.org | success >> {
     "changed": true,
     "dest": "/tmp/motd",
     "group": "root",
     "md5sum": "d41d8cd98f00b204e9800998ecf8427e",
     "mode": "0644",
     "owner": "root",
     "size": 0,
     "src": "/root/.ansible/tmp/ansible-1362910475.9-246937081757218/motd",
     "state": "file"
 }

Ansible (more precisely, the copy module running on the node) responded with a bunch of useful information in JSON format. We will see later how this can be used.


Ansible has a huge list of modules that covers almost everything you might want to do on a system. If you cannot find a suitable module, writing your own is fairly simple (and it does not have to be in Python, as long as it speaks JSON).
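As an illustration that is not part of the original manual, a hypothetical old-style module can be as small as a shell script that prints a single JSON object; the file name and the uptime_seconds key below are made up for the example:

 #!/bin/bash
 # Hypothetical minimal Ansible module: report the node's uptime.
 # Ansible copies the script to the node, runs it, and reads the JSON it prints.
 UPTIME=$(cut -d ' ' -f 1 /proc/uptime)
 echo "{\"changed\": false, \"uptime_seconds\": \"$UPTIME\"}"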


Many hosts, one command


Everything above was great, but we want to manage multiple hosts. Let's try. Suppose we want to gather facts about the nodes, for example which version of Ubuntu they run. This is pretty easy:


 ansible -i step-02/hosts -m shell -a 'grep DISTRIB_RELEASE /etc/lsb-release' all 

all means "all hosts in the inventory file". The output will be something like this:


 host1.example.org | success | rc=0 >>
 DISTRIB_RELEASE=12.04

 host2.example.org | success | rc=0 >>
 DISTRIB_RELEASE=12.04

 host0.example.org | success | rc=0 >>
 DISTRIB_RELEASE=12.04

More facts


Quick and easy. However, if we need more information (IP addresses, RAM size, etc.), this approach quickly becomes inconvenient. The solution is the setup module, which specializes in gathering facts from nodes.


Try:


 ansible -i step-02/hosts -m setup host0.example.org 

The response:


 "ansible_facts": {
     "ansible_all_ipv4_addresses": [
         "192.168.0.60"
     ],
     "ansible_all_ipv6_addresses": [],
     "ansible_architecture": "x86_64",
     "ansible_bios_date": "01/01/2007",
     "ansible_bios_version": "Bochs",
     ---snip---
     "ansible_virtualization_role": "guest",
     "ansible_virtualization_type": "kvm"
 },
 "changed": false,
 "verbose_override": true

The output has been shortened for simplicity, but you can learn a lot from this information. You can also filter the keys if you are interested in something specific.


For example, suppose you need to find out how much memory is available on all hosts. Easy: run ansible -i step-02/hosts -m setup -a 'filter=ansible_memtotal_mb' all:


 host2.example.org | success >> {
     "ansible_facts": {
         "ansible_memtotal_mb": 187
     },
     "changed": false,
     "verbose_override": true
 }

 host1.example.org | success >> {
     "ansible_facts": {
         "ansible_memtotal_mb": 187
     },
     "changed": false,
     "verbose_override": true
 }

 host0.example.org | success >> {
     "ansible_facts": {
         "ansible_memtotal_mb": 187
     },
     "changed": false,
     "verbose_override": true
 }

Notice that the nodes did not reply in the same order as before. That is because Ansible talks to the hosts in parallel!


By the way, with the setup module you can use * wildcards in the filter= expression, just like in the shell.
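For example, to pull all eth-related facts in one go (the pattern is just an illustration):

 ansible -i step-02/hosts -m setup -a 'filter=ansible_eth*' all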


Choice of hosts


We have seen that all means "all hosts", but Ansible offers a bunch of other ways to select hosts; a few examples are sketched below.
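A few common patterns, shown here as a sketch against this tutorial's inventory:

 ansible -i step-02/hosts -m ping 'host0.example.org:host1.example.org'   # an explicit list of hosts
 ansible -i step-02/hosts -m ping 'host*.example.org'                     # shell-style wildcards
 ansible -i step-02/hosts -m ping 'all:!host2.example.org'                # everything except host2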



Host grouping


Hosts in inventory can be grouped. For example, you can create a debian group, a web-servers group, a production group, and so on.


 [debian]
 host0.example.org
 host1.example.org
 host2.example.org

You can even use ranges:


 [debian]
 host[0-2].example.org

If you want to define child groups, use [groupname:children] and list the child groups inside. For example, if we had different Linux distributions, they could be organized as follows:


 [ubuntu]
 host0.example.org

 [debian]
 host[1-2].example.org

 [linux:children]
 ubuntu
 debian

Setting variables


You can add variables for hosts in several places: in the inventory file, host variable files, group variable files, etc.


Usually I set all variables in group/host variable files (more on this later). However, I often set some variables directly in the inventory file, for example ansible_ssh_host, which specifies the host's IP address. By default Ansible resolves host names when connecting over SSH, but a host that is still being bootstrapped may not be resolvable yet; ansible_ssh_host is useful in that case.


When using the ansible-playbook command (rather than plain ansible), variables can also be set with the --extra-vars (or -e) flag. We will talk about ansible-playbook in the next step.
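For example (the variable name and playbook file are hypothetical):

 ansible-playbook -i step-02/hosts -e 'app_version=1.2.0' some-playbook.yml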


ansible_ssh_port , as you might have guessed, is used to specify an SSH connection port.


 [ubuntu]
 host0.example.org ansible_ssh_host=192.168.0.12 ansible_ssh_port=2222

Ansible also looks for additional variables in group and host variable files. It searches for them in the group_vars and host_vars directories next to the main inventory file.


Ansible matches these files by name. For example, with the inventory shown earlier, variables for host0.example.org would be looked up in:

  group_vars/linux
  group_vars/ubuntu
  host_vars/host0.example.org



If these files do not exist, nothing will happen, but if they exist, they will be used.


Now that we have learned about modules, inventory, and variables, let's finally find out about the real power of Ansible with playbooks.


Ansible Playbooks


The concept of playbooks is very simple: a playbook is just a set of Ansible tasks, similar to the ones we ran with the ansible utility, targeted at specific hosts or groups.


Example with Apache (aka the "Hello World!" of Ansible)


We continue with the assumption that your inventory file looks like this (let's call it hosts ):


 [web]
 host1.example.org

and all hosts are Debian based systems.


Note: remember that you can (and in this exercise we do) use ansible_ssh_host to set the host's real IP address. You could also change the inventory and use a real hostname. Either way, use a machine that is safe to experiment on. On real hosts we also add ansible_ssh_user=root to avoid potential problems with differing default configurations.


Let's build a playbook that installs Apache on web group machines.


 - hosts: web
   tasks:
     - name: Installs apache web server
       apt: pkg=apache2 state=installed update_cache=true

We just need to say what we want to do using the correct Ansible modules. Here we use the apt module, which can install Debian packages. We also ask this module to update the cache.


We also give the task a name. This is not required, but it makes the output much easier to follow.


Well, overall it was pretty easy! Now you can start the playbook (let's call it apache.yml ):


 ansible-playbook -i step-04/hosts -l host1.example.org step-04/apache.yml 

Here step-04/hosts is the inventory file, -l limits the launch to host1.example.org ,
and apache.yml is our playbook.


When you run the command, there will be an output similar to this:


 PLAY [web] *********************

 GATHERING FACTS *********************
 ok: [host1.example.org]

 TASK: [Installs apache web server] *********************
 changed: [host1.example.org]

 PLAY RECAP *********************
 host1.example.org        : ok=2    changed=1    unreachable=0    failed=0

Note: you may notice a cow passing by if you have cowsay installed :-) If you don't like it, disable it with: export ANSIBLE_NOCOWS="1".


Let's analyze the output line by line.


 PLAY [web] ********************* 

Ansible tells us it is running a play on the web group. A play is a set of Ansible instructions tied to a set of hosts. If we had another - hosts: blah entry in the playbook, it would show up too (after the first play completes).


 GATHERING FACTS *********************
 ok: [host1.example.org]

Remember the setup module? Before each play, Ansible runs it on every host to gather facts. If you don't need this (say, because you don't use any information about the host), add gather_facts: no under the hosts line (at the same level as tasks:).
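A minimal sketch of where the setting goes:

 - hosts: web
   gather_facts: no
   tasks:
     - name: Installs apache web server
       apt: pkg=apache2 state=installed update_cache=true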


 TASK: [Installs apache web server] *********************
 changed: [host1.example.org]

Now the most important thing: our first and only task is running, and since it says changed , we know that it has changed something on host1.example.org .


 PLAY RECAP *********************
 host1.example.org        : ok=2    changed=1    unreachable=0    failed=0

Finally, Ansible prints a summary of what happened: two tasks ran, and one of them changed something on the host (our apache task; the setup module never changes anything).


Let's run it again and see what happens:


 $ ansible-playbook -i step-04/hosts -l host1.example.org step-04/apache.yml

 PLAY [web] *********************

 GATHERING FACTS *********************
 ok: [host1.example.org]

 TASK: [Installs apache web server] *********************
 ok: [host1.example.org]

 PLAY RECAP *********************
 host1.example.org        : ok=2    changed=0    unreachable=0    failed=0

Now changed is '0'. This is completely normal and is one of Ansible's main features: a playbook acts only when there is something to do. This is called idempotency: you can run a playbook as many times as you like and the machine ends up in the same state (unless you go crazy with the shell module, but there Ansible cannot help you).


Improving the Apache setup


We have installed Apache; now let's configure a virtualhost.


Playbook Upgrade


We only need one virtual host on the server, but we want to replace the default one with something more specific. So we have to remove the current virtualhost, push our own, activate it, and restart Apache.


Let's create a directory called files and add our configuration for host1.example.org, let's call it awesome-app :


 <VirtualHost *:80>
   DocumentRoot /var/www/awesome-app

   Options -Indexes

   ErrorLog /var/log/apache2/error.log
   TransferLog /var/log/apache2/access.log
 </VirtualHost>

Now a small rework of the playbook and everything is ready:


 - hosts: web
   tasks:
     - name: Installs apache web server
       apt: pkg=apache2 state=installed update_cache=true

     - name: Push default virtual host configuration
       copy: src=files/awesome-app dest=/etc/apache2/sites-available/ mode=0640

     - name: Deactivates the default virtualhost
       command: a2dissite default

     - name: Deactivates the default ssl virtualhost
       command: a2dissite default-ssl

     - name: Activates our virtualhost
       command: a2ensite awesome-app
       notify:
         - restart apache

   handlers:
     - name: restart apache
       service: name=apache2 state=restarted

Go:


 $ ansible-playbook -i step-05/hosts -l host1.example.org step-05/apache.yml

 PLAY [web] *********************

 GATHERING FACTS *********************
 ok: [host1.example.org]

 TASK: [Installs apache web server] *********************
 ok: [host1.example.org]

 TASK: [Push default virtual host configuration] *********************
 changed: [host1.example.org]

 TASK: [Deactivates the default virtualhost] *********************
 changed: [host1.example.org]

 TASK: [Deactivates the default ssl virtualhost] *********************
 changed: [host1.example.org]

 TASK: [Activates our virtualhost] *********************
 changed: [host1.example.org]

 NOTIFIED: [restart apache] *********************
 changed: [host1.example.org]

 PLAY RECAP *********************
 host1.example.org        : ok=7    changed=5    unreachable=0    failed=0

Cool! Although, if you think about it, we are getting ahead of ourselves. Shouldn't we check that the configuration is correct before restarting Apache, so that we don't break the service if the configuration contains an error?


Restarting only when the configuration is valid


We installed Apache, changed the virtualhost, and restarted the server. But what if we want to restart the server only when the configuration is correct?


Roll back if there are problems


Ansible has a nice property: it stops all processing on a host as soon as something goes wrong. We will use this to stop the playbook when the configuration is not valid.


Let's deliberately break the awesome-app virtual host configuration file:


 <VirtualHost *:80>
   RocumentDoot /var/www/awesome-app

   Options -Indexes

   ErrorLog /var/log/apache2/error.log
   TransferLog /var/log/apache2/access.log
 </VirtualHost>

As I said, when a task fails, processing stops. So we make sure the configuration is valid before restarting the server. We also add our virtual host before removing the default one, so a later restart (perhaps done directly on the server) will not leave Apache without any site.


We should have done it this way from the start. Since we have already run the playbook, the default virtual host is already deactivated on our host. No matter: this playbook may be used on other, untouched hosts, so let's protect them.


 - hosts: web
   tasks:
     - name: Installs apache web server
       apt: pkg=apache2 state=installed update_cache=true

     - name: Push future default virtual host configuration
       copy: src=files/awesome-app dest=/etc/apache2/sites-available/ mode=0640

     - name: Activates our virtualhost
       command: a2ensite awesome-app

     - name: Check that our config is valid
       command: apache2ctl configtest

     - name: Deactivates the default virtualhost
       command: a2dissite default

     - name: Deactivates the default ssl virtualhost
       command: a2dissite default-ssl
       notify:
         - restart apache

   handlers:
     - name: restart apache
       service: name=apache2 state=restarted

Go:


 $ ansible-playbook -i step-06/hosts -l host1.example.org step-06/apache.yml

 PLAY [web] *********************

 GATHERING FACTS *********************
 ok: [host1.example.org]

 TASK: [Installs apache web server] *********************
 ok: [host1.example.org]

 TASK: [Push future default virtual host configuration] *********************
 changed: [host1.example.org]

 TASK: [Activates our virtualhost] *********************
 changed: [host1.example.org]

 TASK: [Check that our config is valid] *********************
 failed: [host1.example.org] => {"changed": true, "cmd": ["apache2ctl", "configtest"], "delta": "0:00:00.045046", "end": "2013-03-08 16:09:32.002063", "rc": 1, "start": "2013-03-08 16:09:31.957017"}
 stderr: Syntax error on line 2 of /etc/apache2/sites-enabled/awesome-app:
 Invalid command 'RocumentDoot', perhaps misspelled or defined by a module not included in the server configuration
 stdout: Action 'configtest' failed.
 The Apache error log may have more information.

 FATAL: all hosts have already failed -- aborting

 PLAY RECAP *********************
 host1.example.org        : ok=4    changed=2    unreachable=0    failed=1

As you can see, apache2ctl exits with return code 1. Ansible notices this and stops processing. Great!


Hmm, actually not so great... Our broken virtual host has been activated anyway, so any later attempt to restart Apache will choke on the bad configuration and take the service down. We need a way to catch the error and roll back to a working state.


(Translator's note: habr user @clickfreak suggests in the comments looking at the dedicated error-handling feature in Ansible 2.x.)
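That feature is presumably the block/rescue construct. As a sketch that is not part of this manual, the same idea in Ansible 2.x syntax could look like this:

 - hosts: web
   tasks:
     - block:
         - name: Activates our virtualhost
           command: a2ensite awesome-app
         - name: Check that our config is valid
           command: apache2ctl configtest
       rescue:
         - name: Rolling back - Removing our virtualhost
           command: a2dissite awesome-app
         - name: Rolling back - Ending playbook
           fail: msg="Configuration file is not valid. Please check that before re-running the playbook."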


Using conditionals


We installed Apache, added a virtual host, and restarted the server. Now we want to return to a working state if something goes wrong.


Return in case of problems


There is no magic here. The earlier mistake is not Ansible's fault: it is not a backup system and it cannot roll everything back to a previous state. Making your playbooks safe is your responsibility. Ansible simply has no idea how to undo the effect of a2ensite awesome-app.


As mentioned earlier, when a task fails, processing stops... but we can choose to accept the failure (and that is exactly what we need here). So that is what we will do: continue on error, but only in order to put everything back into a working state.


 - hosts: web
   tasks:
     - name: Installs apache web server
       apt: pkg=apache2 state=installed update_cache=true

     - name: Push future default virtual host configuration
       copy: src=files/awesome-app dest=/etc/apache2/sites-available/ mode=0640

     - name: Activates our virtualhost
       command: a2ensite awesome-app

     - name: Check that our config is valid
       command: apache2ctl configtest
       register: result
       ignore_errors: True

     - name: Rolling back - Restoring old default virtualhost
       command: a2ensite default
       when: result|failed

     - name: Rolling back - Removing our virtualhost
       command: a2dissite awesome-app
       when: result|failed

     - name: Rolling back - Ending playbook
       fail: msg="Configuration file is not valid. Please check that before re-running the playbook."
       when: result|failed

     - name: Deactivates the default virtualhost
       command: a2dissite default

     - name: Deactivates the default ssl virtualhost
       command: a2dissite default-ssl
       notify:
         - restart apache

   handlers:
     - name: restart apache
       service: name=apache2 state=restarted

The register keyword records the result of the apache2ctl configtest command (exit status, stdout, stderr, ...) into the result variable, and when: result|failed checks whether that variable reports a failure.
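As a sketch of what the registered variable holds, you could add a hypothetical debug task (not in the original playbook) to print some of its fields:

 - name: Show what configtest returned
   debug: msg="rc={{ result.rc }} stderr={{ result.stderr }}"
   when: result|failed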


Go:


 $ ansible-playbook -i step-07/hosts -l host1.example.org step-07/apache.yml

 PLAY [web] *********************

 GATHERING FACTS *********************
 ok: [host1.example.org]

 TASK: [Installs apache web server] *********************
 ok: [host1.example.org]

 TASK: [Push future default virtual host configuration] *********************
 ok: [host1.example.org]

 TASK: [Activates our virtualhost] *********************
 changed: [host1.example.org]

 TASK: [Check that our config is valid] *********************
 failed: [host1.example.org] => {"changed": true, "cmd": ["apache2ctl", "configtest"], "delta": "0:00:00.051874", "end": "2013-03-10 10:50:17.714105", "rc": 1, "start": "2013-03-10 10:50:17.662231"}
 stderr: Syntax error on line 2 of /etc/apache2/sites-enabled/awesome-app:
 Invalid command 'RocumentDoot', perhaps misspelled or defined by a module not included in the server configuration
 stdout: Action 'configtest' failed.
 The Apache error log may have more information.
 ...ignoring

 TASK: [Rolling back - Restoring old default virtualhost] *********************
 changed: [host1.example.org]

 TASK: [Rolling back - Removing our virtualhost] *********************
 changed: [host1.example.org]

 TASK: [Rolling back - Ending playbook] *********************
 failed: [host1.example.org] => {"failed": true}
 msg: Configuration file is not valid. Please check that before re-running the playbook.

 FATAL: all hosts have already failed -- aborting

 PLAY RECAP *********************
 host1.example.org        : ok=7    changed=4    unreachable=0    failed=1

It seems everything works as it should. Let's try restarting apache:


 $ ansible -i step-07/hosts -m service -a 'name=apache2 state=restarted' host1.example.org

 host1.example.org | success >> {
     "changed": true,
     "name": "apache2",
     "state": "started"
 }

Now our Apache is protected against broken configurations. Remember that variables can be used almost everywhere, so this playbook could be reused for Apache in other situations. Write once, use everywhere.


Deploy a site with Git


We installed Apache, added a virtual host, and restarted the server safely. Now let's deploy the website itself with git.


The git module


Ansible ships with a git module that can check out a repository onto the nodes, so that is what we will use here. (Ansible also has a pull mode, via ansible-pull, where nodes fetch their own configuration, but that is a separate topic.)


Our demo application is written in PHP, so we need the libapache2-mod-php5 package. We also need git itself on the node, since the git module relies on it to clone the repository.


A straightforward way is to add one task per package:


 ...
     - name: Installs apache web server
       apt: pkg=apache2 state=installed update_cache=true

     - name: Installs php5 module
       apt: pkg=libapache2-mod-php5 state=installed

     - name: Installs git
       apt: pkg=git state=installed
 ...

However, Ansible modules can often do better: the apt module can install several packages in a single task with a with_items loop, so the playbook becomes:


 - hosts: web
   tasks:
     - name: Updates apt cache
       apt: update_cache=true

     - name: Installs necessary packages
       apt: pkg={{ item }} state=latest
       with_items:
         - apache2
         - libapache2-mod-php5
         - git

     - name: Push future default virtual host configuration
       copy: src=files/awesome-app dest=/etc/apache2/sites-available/ mode=0640

     - name: Activates our virtualhost
       command: a2ensite awesome-app

     - name: Check that our config is valid
       command: apache2ctl configtest
       register: result
       ignore_errors: True

     - name: Rolling back - Restoring old default virtualhost
       command: a2ensite default
       when: result|failed

     - name: Rolling back - Removing out virtualhost
       command: a2dissite awesome-app
       when: result|failed

     - name: Rolling back - Ending playbook
       fail: msg="Configuration file is not valid. Please check that before re-running the playbook."
       when: result|failed

     - name: Deploy our awesome application
       git: repo=https://github.com/leucos/ansible-tuto-demosite.git dest=/var/www/awesome-app
       tags: deploy

     - name: Deactivates the default virtualhost
       command: a2dissite default

     - name: Deactivates the default ssl virtualhost
       command: a2dissite default-ssl
       notify:
         - restart apache

   handlers:
     - name: restart apache
       service: name=apache2 state=restarted

Run it:


 $ ansible-playbook -i step-08/hosts -l host1.example.org step-08/apache.yml

 PLAY [web] *********************

 GATHERING FACTS *********************
 ok: [host1.example.org]

 TASK: [Updates apt cache] *********************
 ok: [host1.example.org]

 TASK: [Installs necessary packages] *********************
 changed: [host1.example.org] => (item=apache2,libapache2-mod-php5,git)

 TASK: [Push future default virtual host configuration] *********************
 changed: [host1.example.org]

 TASK: [Activates our virtualhost] *********************
 changed: [host1.example.org]

 TASK: [Check that our config is valid] *********************
 changed: [host1.example.org]

 TASK: [Rolling back - Restoring old default virtualhost] *********************
 skipping: [host1.example.org]

 TASK: [Rolling back - Removing out virtualhost] *********************
 skipping: [host1.example.org]

 TASK: [Rolling back - Ending playbook] *********************
 skipping: [host1.example.org]

 TASK: [Deploy our awesome application] *********************
 changed: [host1.example.org]

 TASK: [Deactivates the default virtualhost] *********************
 changed: [host1.example.org]

 TASK: [Deactivates the default ssl virtualhost] *********************
 changed: [host1.example.org]

 NOTIFIED: [restart apache] *********************
 changed: [host1.example.org]

 PLAY RECAP *********************
 host1.example.org        : ok=10   changed=8    unreachable=0    failed=0

Now point your browser at http://192.168.33.11 to see the deployed site.


Did you notice the tags: deploy line? Tags let you run only part of a playbook. When you deploy a new version of the site, you do not need to run the whole playbook again; it is enough to run only the tasks tagged "deploy". Let's try:


 $ ansible-playbook -i step-08/hosts -l host1.example.org step-08/apache.yml -t deploy
 X11 forwarding request failed on channel 0

 PLAY [web] *********************

 GATHERING FACTS *********************
 ok: [host1.example.org]

 TASK: [Deploy our awesome application] *********************
 changed: [host1.example.org]

 PLAY RECAP *********************
 host1.example.org        : ok=2    changed=1    unreachable=0    failed=0


Adding another web server


We have a single web server; now we want two, plus a load balancer in front of them.


Updating the inventory


Since we now have more machines to manage, let's update the inventory:


 [web]
 host1.example.org ansible_ssh_host=192.168.33.11 ansible_ssh_user=root
 host2.example.org ansible_ssh_host=192.168.33.12 ansible_ssh_user=root

 [haproxy]
 host0.example.org ansible_ssh_host=192.168.33.10 ansible_ssh_user=root

Remember, we use ansible_ssh_host because the host names do not resolve to the right IP addresses; you could instead add the corresponding entries to /etc/hosts (on the control machine).


Configuring the second web server


Our playbook already does everything needed; we just run it against the whole web group:


 $ ansible-playbook -i step-09/hosts step-09/apache.yml

 PLAY [web] *********************

 GATHERING FACTS *********************
 ok: [host2.example.org]
 ok: [host1.example.org]

 TASK: [Updates apt cache] *********************
 ok: [host1.example.org]
 ok: [host2.example.org]

 TASK: [Installs necessary packages] *********************
 ok: [host1.example.org] => (item=apache2,libapache2-mod-php5,git)
 changed: [host2.example.org] => (item=apache2,libapache2-mod-php5,git)

 TASK: [Push future default virtual host configuration] *********************
 ok: [host1.example.org]
 changed: [host2.example.org]

 TASK: [Activates our virtualhost] *********************
 changed: [host2.example.org]
 changed: [host1.example.org]

 TASK: [Check that our config is valid] *********************
 changed: [host2.example.org]
 changed: [host1.example.org]

 TASK: [Rolling back - Restoring old default virtualhost] *********************
 skipping: [host1.example.org]
 skipping: [host2.example.org]

 TASK: [Rolling back - Removing out virtualhost] *********************
 skipping: [host1.example.org]
 skipping: [host2.example.org]

 TASK: [Rolling back - Ending playbook] *********************
 skipping: [host1.example.org]
 skipping: [host2.example.org]

 TASK: [Deploy our awesome application] *********************
 ok: [host1.example.org]
 changed: [host2.example.org]

 TASK: [Deactivates the default virtualhost] *********************
 changed: [host1.example.org]
 changed: [host2.example.org]

 TASK: [Deactivates the default ssl virtualhost] *********************
 changed: [host2.example.org]
 changed: [host1.example.org]

 NOTIFIED: [restart apache] *********************
 changed: [host1.example.org]
 changed: [host2.example.org]

 PLAY RECAP *********************
 host1.example.org        : ok=10   changed=5    unreachable=0    failed=0
 host2.example.org        : ok=10   changed=8    unreachable=0    failed=0

Note that this time we did not pass -l host1.example.org. The -l flag limits the run to a subset of hosts; without it, the playbook runs on every host in the web group.


If the web group contained more hosts and we wanted to limit the run to only some of them, we could still do so with -l firsthost:secondhost:...


Both web servers are now configured identically; next, let's put a load balancer in front of them.


Templates


We will install HAProxy and push its configuration. We could push a static file, as we did for Apache, but the HAProxy configuration has to list every backend web server, and that list can change. How do we do it?


The HAProxy configuration template


Ansible uses Jinja2, a templating engine for Python. Inside a Jinja2 template you can use any variable that Ansible knows about.


For instance, the host's inventory name is available as {{ inventory_hostname }} in a Jinja2 template, and the IPv4 address of its eth1 interface (gathered by Ansible's setup module) as {{ ansible_eth1['ipv4']['address'] }}.


Jinja2 templates also support control structures such as loops and conditionals.


Let's create a templates/ directory for our Jinja templates and put a haproxy.cfg.j2 file in it. The .j2 extension is only a convention and is not required.


 global
     daemon
     maxconn 256

 defaults
     mode http
     timeout connect 5000ms
     timeout client 50000ms
     timeout server 50000ms

 listen cluster
     bind {{ ansible_eth1['ipv4']['address'] }}:80
     mode http
     stats enable
     balance roundrobin
 {% for backend in groups['web'] %}
     server {{ hostvars[backend]['ansible_hostname'] }} {{ hostvars[backend]['ansible_eth1']['ipv4']['address'] }} check port 80
 {% endfor %}
     option httpchk HEAD /index.php HTTP/1.0

Quite a few new things are going on here.


First, {{ ansible_eth1['ipv4']['address'] }} is replaced by the IP address of the load balancer's eth1 interface.


Next comes a loop. It iterates over all hosts in the [web] group, binding each one to the backend variable in turn. For every host it emits a server line, using hostvars, a dictionary that holds the facts (hostname, IP addresses, and so on) of every known host.


Thanks to the loop, we never have to edit this configuration by hand: whenever we add or remove machines from the [web] group, the template regenerates the backend list for us.


HAProxy playbook


The hardest part is behind us. The HAProxy playbook looks like this:


 - hosts: haproxy
   tasks:
     - name: Installs haproxy load balancer
       apt: pkg=haproxy state=installed update_cache=yes

     - name: Pushes configuration
       template: src=templates/haproxy.cfg.j2 dest=/etc/haproxy/haproxy.cfg mode=0640 owner=root group=root
       notify:
         - restart haproxy

     - name: Sets default starting flag to 1
       lineinfile: dest=/etc/default/haproxy regexp="^ENABLED" line="ENABLED=1"
       notify:
         - restart haproxy

   handlers:
     - name: restart haproxy
       service: name=haproxy state=restarted

Looks familiar, doesn't it? The only new module here is template, which works like copy except that it renders the Jinja2 template before pushing the result to the node. We also use lineinfile to set ENABLED=1 in /etc/default/haproxy so that the haproxy service is allowed to start.


There is one subtlety, though. The template uses facts from the web hosts, but Ansible only gathers facts for the hosts a play actually runs on. If we ran only the haproxy playbook, the facts for the backends would be missing and the template would fail. So we run both playbooks together:


 $ ansible-playbook -i step-10/hosts step-10/apache.yml step-10/haproxy.yml

 PLAY [web] *********************

 GATHERING FACTS *********************
 ok: [host1.example.org]
 ok: [host2.example.org]

 TASK: [Updates apt cache] *********************
 ok: [host1.example.org]
 ok: [host2.example.org]

 TASK: [Installs necessary packages] *********************
 ok: [host1.example.org] => (item=apache2,libapache2-mod-php5,git)
 ok: [host2.example.org] => (item=apache2,libapache2-mod-php5,git)

 TASK: [Push future default virtual host configuration] *********************
 ok: [host2.example.org]
 ok: [host1.example.org]

 TASK: [Activates our virtualhost] *********************
 changed: [host1.example.org]
 changed: [host2.example.org]

 TASK: [Check that our config is valid] *********************
 changed: [host1.example.org]
 changed: [host2.example.org]

 TASK: [Rolling back - Restoring old default virtualhost] *********************
 skipping: [host1.example.org]
 skipping: [host2.example.org]

 TASK: [Rolling back - Removing out virtualhost] *********************
 skipping: [host1.example.org]
 skipping: [host2.example.org]

 TASK: [Rolling back - Ending playbook] *********************
 skipping: [host1.example.org]
 skipping: [host2.example.org]

 TASK: [Deploy our awesome application] *********************
 ok: [host2.example.org]
 ok: [host1.example.org]

 TASK: [Deactivates the default virtualhost] *********************
 changed: [host1.example.org]
 changed: [host2.example.org]

 TASK: [Deactivates the default ssl virtualhost] *********************
 changed: [host2.example.org]
 changed: [host1.example.org]

 NOTIFIED: [restart apache] *********************
 changed: [host2.example.org]
 changed: [host1.example.org]

 PLAY RECAP *********************
 host1.example.org        : ok=10   changed=5    unreachable=0    failed=0
 host2.example.org        : ok=10   changed=5    unreachable=0    failed=0

 PLAY [haproxy] *********************

 GATHERING FACTS *********************
 ok: [host0.example.org]

 TASK: [Installs haproxy load balancer] *********************
 changed: [host0.example.org]

 TASK: [Pushes configuration] *********************
 changed: [host0.example.org]

 TASK: [Sets default starting flag to 1] *********************
 changed: [host0.example.org]

 NOTIFIED: [restart haproxy] *********************
 changed: [host0.example.org]

 PLAY RECAP *********************
 host0.example.org        : ok=5    changed=4    unreachable=0    failed=0

Open http://192.168.33.10/ in your browser: requests are now balanced across both web servers. You can also look at the HAProxy statistics page at http://192.168.33.10/haproxy?stats.



Variables again


So far many values are hardcoded in the template. To make the setup more flexible, we want to define some of them ourselves.


Ansible lets you define variables in many places. We have already seen ansible_ssh_host in the inventory; this time we will use the host_vars and group_vars directories.


Tuning the HAProxy configuration


HAProxy periodically checks that its backends are alive. We want to make this check interval configurable instead of hardcoding it in the HAProxy template.


We also want to give each backend its own weight (an integer between 0 and 256), so that more powerful machines receive a larger share of the requests.


Let's turn both of these into variables.


Group variables


The check interval belongs in a group_vars file for the haproxy group, since it is a property of the load balancer itself.


Create group_vars/haproxy next to the inventory file. The file is picked up automatically because its name matches a group; if we wanted variables for the web group, we would create group_vars/web the same way.


 haproxy_check_interval: 3000
 haproxy_stats_socket: /tmp/sock

. , , - . ( Python dict) :


 haproxy:
     check_interval: 3000
     stats_socket: /tmp/sock

It's a matter of taste. Nested structures can be tidier but are slightly more verbose to reference, so we will stick with flat names here.



The backend weights are per-host values, so they go into host_vars files. Create host_vars/host1.example.org:


 haproxy_backend_weight: 100 

and host_vars/host2.example.org:


 haproxy_backend_weight: 150 

We could also define a default haproxy_backend_weight in group_vars/web and override it for individual hosts: variables from host_vars take precedence over those from group_vars.
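For example, a hypothetical group_vars/web file (not part of the tutorial's step files) could provide the default:

 # group_vars/web
 haproxy_backend_weight: 100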



Now let's update the template to use these variables.


 global
     daemon
     maxconn 256
 {% if haproxy_stats_socket %}
     stats socket {{ haproxy_stats_socket }}
 {% endif %}

 defaults
     mode http
     timeout connect 5000ms
     timeout client 50000ms
     timeout server 50000ms

 listen cluster
     bind {{ ansible_eth1['ipv4']['address'] }}:80
     mode http
     stats enable
     balance roundrobin
 {% for backend in groups['web'] %}
     server {{ hostvars[backend]['ansible_hostname'] }} {{ hostvars[backend]['ansible_eth1']['ipv4']['address'] }} check inter {{ haproxy_check_interval }} weight {{ hostvars[backend]['haproxy_backend_weight'] }} port 80
 {% endfor %}
     option httpchk HEAD /index.php HTTP/1.0

Notice the {% if ... %} block: the stats socket line is written out only when the haproxy_stats_socket variable is defined. You could even pass it at run time with --extra-vars="haproxy_stats_socket=/tmp/sock".


Convenient, isn't it!


Time to run the playbook:


 ansible-playbook -i step-11/hosts step-11/haproxy.yml 

Wait: this time we are not running the apache playbook, yet the template needs facts from the web hosts. If we do not gather them, the template will fail. So we add an empty play for the web group at the top of the haproxy playbook:


 - hosts: web

 - hosts: haproxy
   tasks:
     - name: Installs haproxy load balancer
       apt: pkg=haproxy state=installed update_cache=yes

     - name: Pushes configuration
       template: src=templates/haproxy.cfg.j2 dest=/etc/haproxy/haproxy.cfg mode=0640 owner=root group=root
       notify:
         - restart haproxy

     - name: Sets default starting flag to 1
       lineinfile: dest=/etc/default/haproxy regexp="^ENABLED" line="ENABLED=1"
       notify:
         - restart haproxy

   handlers:
     - name: restart haproxy
       service: name=haproxy state=restarted

See the trick? The play for the web group has no tasks at all, but Ansible still gathers facts for those hosts. So when the haproxy play renders the template, facts such as ansible_eth1 are available for every backend.


And it works!


Roles


Our playbooks are growing, and the bigger the infrastructure, the harder a single file becomes to maintain. Roles solve this: they let you split playbooks into reusable, self-contained pieces. Roles can also depend on one another: if role B depends on role A, applying B will apply A as well.



There is no special "role" magic in Ansible: a role is just a conventional directory layout. Tasks, handlers, files, templates, and variables each live in an agreed place, and Ansible picks them up automatically. This is "convention over configuration".


A typical role directory tree looks like this:


 roles
   |
   |_some_role
       |
       |_files
       |   |_file1
       |   |_...
       |
       |_templates
       |   |_template1.j2
       |   |_...
       |
       |_tasks
       |   |_main.yml
       |   |_some_other_file.yml
       |   |_...
       |
       |_handlers
       |   |_main.yml
       |   |_some_other_file.yml
       |   |_...
       |
       |_vars
       |   |_main.yml
       |   |_some_other_file.yml
       |   |_...
       |
       |_meta
           |_main.yml
           |_some_other_file.yml
           |_...

Pretty simple.


In each of these directories, main.yml is the entry point: it is loaded automatically, and from it you can include other files in the same directory.
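For instance, a tasks/main.yml might pull in additional task files like this (the file names are illustrative; this is the classic include syntax of the Ansible 1.x era):

 # roles/some_role/tasks/main.yml
 - include: install.yml
 - include: configure.yml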


The vars and meta directories are new. vars holds variables used by the role. Keeping configuration values there makes the role self-contained, but it also hard-wires them: values that users of the role are expected to change are usually better kept in group or host variables, since a role's vars take precedence over them. In short, use vars for things that rarely change, and ordinary Ansible variables for everything else.


meta is where role dependencies are declared; the roles listed there are looked up in the roles directory and applied before the role itself.
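As a sketch with made-up role names, such a dependency is declared in the dependent role's meta/main.yml:

 # roles/B/meta/main.yml
 dependencies:
   - { role: A }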


The Apache role


Let's convert our Apache playbook into a role.


For the apache role we need three directories: tasks, handlers, and files.




It's simple:


 mkdir -p step-12/roles/apache/{tasks,handlers,files} 

Now move the tasks from apache.yml into roles/apache/tasks/main.yml. The file contains only the list of tasks:


 - name: Updates apt cache
   apt: update_cache=true

 - name: Installs necessary packages
   apt: pkg={{ item }} state=latest
   with_items:
     - apache2
     - libapache2-mod-php5
     - git

 ...

 - name: Deactivates the default ssl virtualhost
   command: a2dissite default-ssl
   notify:
     - restart apache

Note that it is a bare list of tasks: the hosts:, tasks: and handlers: keys from apache.yml are gone, and the handlers move into their own file.


Paths to files and templates can now be given relative to the role's files/ and templates/ directories; Ansible looks them up there automatically.
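For example, inside the role the copy task from our playbook can drop the files/ prefix (a small sketch of the same task):

 - name: Push future default virtual host configuration
   copy: src=awesome-app dest=/etc/apache2/sites-available/ mode=0640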



The handler goes into step-12/roles/apache/handlers/main.yml:


 - name: restart apache
   service: name=apache2 state=restarted


Finally, copy the virtual host configuration into the role's files directory:


 cp step-11/files/awesome-app step-12/roles/apache/files/ 

The apache role is ready. Now we need a playbook that applies it.



By convention the top-level playbook is called site.yml. It simply maps roles to host groups; with haproxy turned into a role as well, it looks like this:


 - hosts: web
   roles:
     - { role: apache }

 - hosts: haproxy
   roles:
     - { role: haproxy }

Let's create the haproxy role the same way:


 mkdir -p step-12/roles/haproxy/{tasks,handlers,templates}
 cp step-11/templates/haproxy.cfg.j2 step-12/roles/haproxy/templates/

then move the haproxy tasks and handlers into the role, just as we did for apache; inside the role, the template is referenced relative to its templates/ directory.


Shall we try it?


 ansible-playbook -i step-12/hosts step-12/site.yml 

If everything went well, the "PLAY RECAP" should look like this:


 host0.example.org        : ok=5    changed=2    unreachable=0    failed=0
 host1.example.org        : ok=10   changed=5    unreachable=0    failed=0
 host2.example.org        : ok=10   changed=5    unreachable=0    failed=0

Note that we ran site.yml, which applies the whole infrastructure. What if we only wanted to configure the web servers? Easy, with the limit flag:


 ansible-playbook -i step-12/hosts -l web step-12/site.yml 

And that's it.





Source: https://habr.com/ru/post/305400/

