
Amazon + Ansible


In this article I want to talk about some techniques for solving problems with the Ansible + Amazon combination; perhaps it will be useful to someone, or perhaps someone will suggest better solutions. There is already plenty of information about installing and configuring Ansible, so I will skip that, and I cannot add anything original about working with Amazon either. So let's get started.

Introduction


I was asked to help with a one-time project setup, specifically the Ansible + Amazon part, as well as the configuration of Ubuntu-based servers and services. The requirements for the project were as follows:


Note

The customer did not want any changes made to servers that were already running: if something needs to change, you change the creation scripts, tear down the old server, and create a new one. The customer is the boss.
Getting to work

For most tasks, you can use the ec2 module. The module is wonderful :), with a lot of related cloud modules; you can read their descriptions in /usr/share/ansible/cloud (on Ubuntu/Debian).
In the Ansible inventory file (hosts) you only need to register one host:

 [local]
 localhost


I started with the simplest case, a Windows server. Here is an example playbook:

 - hosts: localhost
   connection: local
   gather_facts: False
   vars:
     hostname: Windows
     ec2_access_key: "Secret"
     ec2_secret_key: "Secret_key"
     instance_type: "t2.micro"
     image: "ami-xxxxxxxx"
     group: "launch-wizard-1"
     region: "us-west-2"
   tasks:
     - name: make one instance
       ec2: >
         image={{ image }}
         instance_type={{ instance_type }}
         aws_access_key={{ ec2_access_key }}
         aws_secret_key={{ ec2_secret_key }}
         instance_tags='{ "Name":"{{ hostname }}" }'
         region={{ region }}
         group={{ group }}
         wait=true


Everything is simple: using your aws_access_key and aws_secret_key, Ansible sends Amazon a request to create a machine named hostname="Windows" of type instance_type="t2.micro" from the previously created image image="ami-xxxxxxxx" in region="us-west-2", assigns it the security group group="launch-wizard-1", and nothing more.

Next, something more complicated: nginx backends. As before, we start by creating the machine, but this time we need to keep working with it in the same playbook.
Create an Amazon keypair in the console; let's call it, for example, aws_ansible. Download the key and copy it to ~/.ssh/id_rsa of the user you run playbooks as.

Here is an example playbook that creates a backend:

 - hosts: localhost
   connection: local
   gather_facts: False
   vars:
     hostname: Nginx_nodejs
     ec2_access_key: "Secret"
     ec2_secret_key: "Secret_key"
     keypair: "aws_ansible"
     instance_type: "t2.micro"
     image: "ami-33db9803"
     group: "launch-wizard-1"
     region: "us-west-2"
   tasks:
     - name: make one instance
       ec2: >
         image={{ image }}
         instance_type={{ instance_type }}
         aws_access_key={{ ec2_access_key }}
         aws_secret_key={{ ec2_secret_key }}
         keypair={{ keypair }}
         instance_tags='{ "Name":"{{ hostname }}", "Group":"nginx_backend" }'
         region={{ region }}
         group={{ group }}
         wait=true
       register: ec2_info
     - debug: var=ec2_info
     - debug: var=item
       with_items: ec2_info.instance_ids
     - add_host: hostname={{ item.public_ip }} groupname=ec2hosts
       with_items: ec2_info.instances
     - name: wait for instances to listen on port:22
       wait_for: state=started host={{ item.public_dns_name }} port=22
       with_items: ec2_info.instances

 - hosts: ec2hosts
   gather_facts: True
   user: ubuntu
   sudo: True
   vars:
     connections: "4096"
   tasks:
     - include: nginx/tasks/setup.yml
   handlers:
     - name: restart nginx
       action: service name=nginx state=restarted

 - hosts: ec2hosts
   gather_facts: True
   user: ubuntu
   sudo: True
   tasks:
     - include: nodejs/tasks/setup.yml


Now, what has changed:
We told Ansible to use our key for this machine: keypair: "aws_ansible".
We specified a clean Ubuntu image: image: "ami-33db9803".
With the help of register and debug we obtained the public_ip of the new machine and recorded it in a temporary inventory, in the ec2hosts group; it is not possible to write into the hosts file from a playbook (at least, I did not find a way).
The next task, "wait for instances to listen on port:22", waits until ssh becomes available.
And after all this we run the usual plays against an ordinary server, in my case installing and configuring nginx and nodejs.

I also added the tag "Group":"nginx_backend"; it is needed in order to work with all the backends at once. How? Ansible ships a dynamic inventory script for Amazon servers. You can read about it, and download it, here: docs.ansible.com/intro_dynamic_inventory.html#id6 .
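The ec2.py script emits a JSON inventory in which instances are grouped by their tags, so a tag like "Group":"nginx_backend" becomes a group named tag_Group_nginx_backend. A minimal sketch of pulling the hosts of such a group out of that output (the sample inventory and IPs here are made up for illustration):

```python
import json

# A tiny, hypothetical sample of the JSON structure that ec2.py --list prints.
sample_inventory = json.loads("""
{
  "tag_Group_nginx_backend": ["54.0.0.1", "54.0.0.2"],
  "tag_Name_Windows": ["54.0.0.3"]
}
""")

def hosts_in_group(inventory, group):
    """Return the list of hosts ec2.py placed in the given tag group."""
    return inventory.get(group, [])

backends = hosts_in_group(sample_inventory, "tag_Group_nginx_backend")
print(backends)
```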

Great, but my situation is a bit different: I need to build an nginx upstream with a number of backends that is not known in advance. Having combed through the Ansible documentation, I did not find a way to build dynamic lists. That is, substituting a backend's IP dynamically is easy, but varying how many there are is not. As usual, the old method came to the rescue: I reinvented the wheel and wrote a small Python script that is called from the playbook before the nginx configuration step and generates a config with the upstream block.

Listing:

 #!/usr/bin/env python
 import sys, os
 from commands import *

 group = '"tag_Group_nginx_backend": ['
 template = "/etc/ansible/playbooks/nginx/templates/balance.conf.j2"
 list_ip = []

 # Create ec2_list
 data = getoutput("/etc/ansible/ec2.py --refresh-cache")
 flag = 0
 for line in data.split("\n"):
     if flag:
         if line.strip() != "],":
             list_ip.append(line.strip().strip(",").strip("\""))
         else:
             break
     if line.strip() == group:
         flag = 1

 f = open(template, 'w')
 f.write('''# upstream list
 upstream backend {''')
 f.close()
 for ip in list_ip:
     f = open(template, 'a')
     f.write('''
     server ''' + ip + ''':80 weight=3 fail_timeout=15s;''')
     f.close()
 f = open(template, 'a')
 f.write('''
 }''')
 f.close()
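The script above scrapes ec2.py's text output line by line, which is fragile. Since ec2.py prints JSON, an alternative sketch (the function name and sample data are mine) that builds the same upstream block from parsed JSON:

```python
import json

def render_upstream(inventory_json, group="tag_Group_nginx_backend"):
    """Build an nginx upstream block from ec2.py --list JSON output."""
    inventory = json.loads(inventory_json)
    lines = ["# upstream list", "upstream backend {"]
    for ip in inventory.get(group, []):
        # Same server line the original script writes.
        lines.append("    server %s:80 weight=3 fail_timeout=15s;" % ip)
    lines.append("}")
    return "\n".join(lines)

# Hypothetical sample of ec2.py output:
sample = '{"tag_Group_nginx_backend": ["10.0.0.1", "10.0.0.2"]}'
print(render_upstream(sample))
```

The result can be written to the same balance.conf.j2 path before the nginx play runs.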


Another problem was the redis master: its IP needs to be written into every slave's configuration. I decided to do it with include_vars.

When creating the master, before the ssh availability check, I do this:

  - replace: dest={{ redis_master_ip }} regexp='^(\s+)(master\:)\s(.*)$' replace='\1\2 {{ item.public_ip }}'
    with_items: ec2_info.instances

Among the variables I defined:

  redis_master_ip: "/etc/ansible/playbooks/redis/files/master_ip.yml" 

The file itself must exist beforehand, and initially it looks like this:

  master: 1.2.3.4 

Then in the redis slave settings add:

  - name: Get master IP
    include_vars: "{{ redis_master_ip }}"


Use the {{master}} variable in the template.
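The replace task above rewrites the master: line of that vars file in place. The same substitution in plain Python, to illustrate what the regexp does (the new IP is a stand-in for item.public_ip):

```python
import re

# The file content Ansible's replace module edits; the leading indentation
# matters, because the regexp requires whitespace before "master:".
content = "  master: 1.2.3.4\n"
new_ip = "54.200.1.2"  # stand-in for {{ item.public_ip }}

updated = re.sub(r'^(\s+)(master\:)\s(.*)$',
                 r'\1\2 ' + new_ip,
                 content, flags=re.MULTILINE)
print(updated)
```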

A security group is created simply, using the ec2_group module:

 - hosts: localhost
   connection: local
   tasks:
     - name: nginx ec2 group
       local_action:
         module: ec2_group
         name: nginx
         description: an nginx EC2 group
         region: us-west-2
         aws_secret_key: "Secret"
         aws_access_key: "Secret"
         rules:
           - proto: tcp
             from_port: 80
             to_port: 80
             cidr_ip: 192.168.0.0/24
           - proto: tcp
             from_port: 22
             to_port: 22
             cidr_ip: 0.0.0.0/0
         rules_egress:
           - proto: all
             cidr_ip: 0.0.0.0/0


Things turned out to be more difficult with queues: there was no module for them, and I even tried to finish off someone else's attempt at writing one. But I quickly came to my senses and did it through CloudFormation.

Here is the playbook:

 - hosts: localhost
   connection: local
   gather_facts: False
   vars:
     sqs_access_key: "Secret"
     sqs_secret_key: "Secret"
     region: "us-west-2"
   tasks:
     - name: launch some aws services
       cloudformation: >
         stack_name="TEST"
         region={{ region }}
         template=files/cloudformation.json

And the template (files/cloudformation.json):

 {
   "AWSTemplateFormatVersion" : "2010-09-09",
   "Description" : "AWS CloudFormation SQS",
   "Resources" : {
     "MyQueue" : {
       "Type" : "AWS::SQS::Queue"
     }
   },
   "Outputs" : {
     "QueueURL" : {
       "Description" : "URL of newly created SQS Queue",
       "Value" : { "Ref" : "MyQueue" }
     },
     "QueueARN" : {
       "Description" : "ARN of newly created SQS Queue",
       "Value" : { "Fn::GetAtt" : ["MyQueue", "Arn"] }
     }
   }
 }
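CloudFormation rejects templates that are not valid JSON, so it can save a round trip to validate the file locally before launching the stack. A minimal sketch, with a trimmed copy of the template inlined as a string:

```python
import json

# Trimmed copy of the SQS template, inlined for the check.
template = """
{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Description" : "AWS CloudFormation SQS",
  "Resources" : {
    "MyQueue" : { "Type" : "AWS::SQS::Queue" }
  }
}
"""

def check_template(text):
    """Parse the template and confirm it declares at least one resource."""
    doc = json.loads(text)  # raises ValueError on malformed JSON
    assert "Resources" in doc and doc["Resources"], "no Resources declared"
    return doc

doc = check_template(template)
print(sorted(doc["Resources"]))
```

In practice you would read files/cloudformation.json from disk instead of inlining it.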


Summary

The popularity of cloud solutions and configuration management systems is growing, yet I still spent a fair amount of time searching for options, collecting information, and so on. Hence the idea of writing this article.

Author: Roman Burnashev, Chief System Administrator, centos-admin.ru

Source: https://habr.com/ru/post/242083/

