
About half a year ago I had to design a reverse proxy scheme for a site: many nodes (n > 20) proxying to a few (n <= 3) production servers. Recently a colleague came to me with a similar request, so I decided to collect everything into this article.
The article is aimed at beginners.
What was needed in the end was a simple tool for adding new nodes and updating the list of domains. Such a scheme pays off when caching is enabled on the proxy nodes and the DNS answers are geolocation-aware.
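To illustrate the DNS side (a hypothetical zone fragment, not part of the project itself): with plain round-robin every node IP is returned for the domain, while a geolocation-aware DNS service would answer with only the node closest to the visitor.

example.com.    300  IN  A  xxx.xxx.xxx.xxx   ; node in location 1
example.com.    300  IN  A  yyy.yyy.yyy.yyy   ; node in location 2
example.com.    300  IN  A  zzz.zzz.zzz.zzz   ; node in location 3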
Searching for information on reverse proxying mostly turns up articles on putting nginx in front of Apache (local or on a remote upstream server), or on CDN proxy services (CloudFlare, *cdn, CloudFront, etc.). None of that quite fit this case.
The peculiarity here is the need to serve the domains of one or two servers from many different IP addresses in different geographic locations.
To solve the problem, several VPSes were purchased in the required locations (cheap, thanks to lowendbox.com & lowendstock.com, but with the required bandwidth). So far the VPSes run CentOS 6 x32; as soon as EPEL rolls out packages for 32-bit CentOS 7, they will be upgraded. All further manipulation of the servers is done remotely, with Ansible.
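Before running any playbooks, it is worth checking that Ansible can reach every node; a standard ad-hoc ping against the inventory described below does the job (output abridged, one node shown):

$ ansible proxy-nodes -m ping
xxx.xxx.xxx.xxx | SUCCESS => {
    "changed": false,
    "ping": "pong"
}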
Ansible project structure
In accordance with accepted practice, the project has the following file structure:
$ find -type f
./roles/update_os/tasks/main.yml
./roles/update_nginx_configs/tasks/main.yml
./roles/update_nginx_configs/files/proxy.conf
./roles/update_nginx_configs/templates/domain.conf.j2
./roles/update_nginx_configs/handlers/main.yml
./roles/update_hostname/tasks/main.yml
./ansible.cfg
./hosts
./proxy-nodes.yml
Let's go through all the files.
./hosts
[test]
localhost ansible_connection=local

[centos:children]
proxy-nodes

[proxy-nodes]
xxx.xxx.xxx.xxx ansible_connection=ssh ansible_ssh_user=root ansible_ssh_pass=xxxxxx node_hostname=proxy-node-001.www.co
yyy.yyy.yyy.yyy ansible_connection=ssh ansible_ssh_user=root ansible_ssh_pass=yyyyyy node_hostname=proxy-node-010.www.co
zzz.zzz.zzz.zzz ansible_connection=ssh ansible_ssh_user=root ansible_ssh_pass=zzzzzz node_hostname=proxy-node-029.www.co
Here the meta group [centos] is declared, with [proxy-nodes] listed separately as its child group.
The structure is laid out with future expansion of roles and tasks in mind.
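The child group makes it possible to target either level of the hierarchy. For example, listing which hosts Ansible resolves for the meta group (real --list-hosts flag, hypothetical output matching the inventory above):

$ ansible centos --list-hosts
    xxx.xxx.xxx.xxx
    yyy.yyy.yyy.yyy
    zzz.zzz.zzz.zzz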
./ansible.cfg
[defaults]
pipelining = True
hostfile = hosts

[ssh_connection]
ssh_args = -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o IdentitiesOnly=yes -o ControlMaster=auto -o ControlPersist=60s
control_path = ~/.ansible/cp/ansible-ssh-%%h-%%p-%%r
There is nothing special here either. The servers are set up as root, so pipelining can safely be turned on.
hostfile - saves typing the inventory path when working from the console;
ssh_args - cuts down on SSH chatter when hosts get reinstalled, and sets up a persistent connection; the most important option here is ControlPath.
ControlPath is better seen once than explained:

$ ssh -o ControlMaster=auto -o ControlPersist=60s -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o IdentitiesOnly=yes -o ControlPath=/tmp/habr.socket root@192.168.124.185
Warning: Permanently added '192.168.124.185' (RSA) to the list of known hosts.
root@192.168.124.185's password:
Last login: Thu Mar 10 22:46:41 2016
[root@test001 ~]# service sshd stop
Stopping sshd: [OK]
[root@test001 ~]# exit
logout
Shared connection to 192.168.124.185 closed.
$ ssh -o ControlPath=/tmp/habr.socket root@192.168.124.185
Last login: Thu Mar 10 22:48:12 2016 from 192.168.124.1
[root@test001 ~]# exit
logout
Shared connection to 192.168.124.185 closed.
$ ssh root@192.168.124.185
ssh: connect to host 192.168.124.185 port 22: Connection refused
$ ssh -o ControlPath=/tmp/habr.socket root@192.168.124.185
Last login: Thu Mar 10 22:48:47 2016 from 192.168.124.1
[root@test001 ~]# service sshd start
Starting sshd: [OK]
[root@test001 ~]# exit
logout
Shared connection to 192.168.124.185 closed.
$ ssh root@192.168.124.185
Warning: Permanently added '192.168.124.185' (RSA) to the list of known hosts.
root@192.168.124.185's password:
Note how, while the master connection persists, new sessions piggyback on the existing socket and succeed even with sshd stopped: no new TCP connection or authentication is needed. This lets you work faster than with the accelerate: true option. Despite what the documentation says, CentOS 6 has supported ControlPersist for a loooong time, and no longer requires the kind of preparation that used to be done, for example ./prepare-accelerate.yml, which prepared a node for the accelerate: true option in the playbook ./proxy-nodes.yml:

./prepare-accelerate.yml
---
- hosts: centos
  tasks:
    - name: install EPEL
      yum: name=epel-release
    - name: install keyczar
      yum: name=python-keyczar
Next comes the standard playbook wiring up the roles, and the update_os tasks:
./proxy-nodes.yml
---
- hosts: proxy-nodes
  roles:
    - update_hostname
    - update_os
    - update_nginx_configs
./roles/update_os/tasks/main.yml
---
- name: repo install EPEL
  yum: name=epel-release
- name: repo install nginx-release-centos-6
  yum: state=present name=http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm
- name: packages install some
  yum: name={{ item }}
  with_items:
    - nginx
    - yum-update
- name: packages upgrade all
  yum: name=* state=latest
I believe these files need no commentary.
Role update_hostname
It so happens that the nodes carry meaningful names. As you can see, the hosts file sets the self-explanatory node_hostname parameter for each node. Unfortunately, Ansible still cannot make the hostname resolve as an FQDN on its own, so we have to help it along:
./roles/update_hostname/tasks/main.yml
---
- name: set hostname
  hostname: name={{ node_hostname }}
- name: add hostname to /etc/hosts
  lineinfile: dest=/etc/hosts regexp='.*{{ node_hostname }}$' line="{{ ansible_default_ipv4.address }} {{ node_hostname }}" state=present create=yes
  when: ansible_default_ipv4.address is defined
Now hostname -f no longer complains, and that is exactly the check some hosting control panels perform.
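A quick way to verify on a node (hypothetical output for the first node from ./hosts):

$ hostname -f
proxy-node-001.www.co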
Role update_nginx_configs
The last role is update_nginx_configs. First we describe the handler that reloads nginx:
./roles/update_nginx_configs/handlers/main.yml
---
- name: reload nginx
  service: name=nginx state=reloaded
The next file declares a caching zone in the http section and includes the future per-domain proxy configs:
./roles/update_nginx_configs/files/proxy.conf
proxy_cache_path /tmp levels=1:2 keys_zone=PROXY:10m inactive=24h max_size=4g use_temp_path=off;
include /etc/nginx/conf.d/proxy/*.conf;
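It helps to note where proxy.conf itself gets included from: the stock nginx.conf shipped in the nginx.org package already pulls every file from conf.d into the http section, which is what makes this file, and through it the per-domain configs, take effect:

include /etc/nginx/conf.d/*.conf;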
The template for the domains looks like this:
./roles/update_nginx_configs/templates/domain.conf.j2
server {
    listen {{ ansible_default_ipv4.address }}:80;
    server_name {{ item.domain }} www.{{ item.domain }};

    access_log /var/log/nginx/{{ item.domain }}.access.log main;
    error_log /var/log/nginx/{{ item.domain }}.error.log;

    location / {
        proxy_pass http://{{ item.remoteip }}:80/;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        client_max_body_size 10m;
        proxy_connect_timeout 90;
        proxy_cache PROXY;
        proxy_cache_valid 200 302 1d;
        proxy_cache_valid 404 30m;
        proxy_cache_valid any 1m;
        proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504;
    }
}
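For clarity, here is what the rendered config might look like for the first domain-remoteip pair from the tasks file below, assuming a node IP of xxx.xxx.xxx.xxx (a placeholder, as in ./hosts); it would land in /etc/nginx/conf.d/proxy/nginx.org.conf:

server {
    listen xxx.xxx.xxx.xxx:80;
    server_name nginx.org www.nginx.org;

    access_log /var/log/nginx/nginx.org.access.log main;
    error_log /var/log/nginx/nginx.org.error.log;

    location / {
        proxy_pass http://206.251.255.63:80/;
        # ... the remaining proxy and cache options are rendered verbatim from the template
    }
}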
There is nothing special here; the proxying and cache options were tuned for this particular project. Among the dynamic parameters there are only three: ansible_default_ipv4.address, item.domain and item.remoteip. Where the last two come from is evident from the following file:
./roles/update_nginx_configs/tasks/main.yml
---
- name: create non existing dir /etc/nginx/conf.d/proxy/
  file: path=/etc/nginx/conf.d/proxy/ state=directory mode=0755
- copy: src=proxy.conf dest=/etc/nginx/conf.d/ owner=nginx group=nginx backup=yes
- name: re-create domain templates
  template: src=domain.conf.j2 dest=/etc/nginx/conf.d/proxy/{{ item.domain }}.conf owner=nginx group=nginx backup=yes
  with_items:
    - { domain: 'nginx.org', remoteip: '206.251.255.63' }
    - { domain: 'docs.ansible.com', remoteip: '104.25.170.30' }
  notify: reload nginx
- name: validate nginx conf
  command: nginx -t
  changed_when: false
These are the final steps: make sure the directory for the domain configs exists, update the config with the caching-zone settings, then loop over all domain-remoteip pairs with with_items and re-create the configs.
The last task validates the configuration, and if it passes, the notified reload nginx handler fires. Unfortunately, this validation cannot be attached to the template generation or the copying of proxy.conf itself: validate="nginx -t -c %s", or even validate="nginx -t -c /etc/nginx/nginx.conf -p %s", does not work as nicely here as it does when generating an httpd.conf, because nginx only validates a complete configuration, and each generated file is just an include fragment.
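For contrast, a minimal sketch of the inline variant that does not help here (validate is a real parameter of the template module; %s is substituted with the path of the temporarily rendered file):

- name: re-create domain templates
  template: src=domain.conf.j2 dest=/etc/nginx/conf.d/proxy/{{ item.domain }}.conf validate='nginx -t -c %s'
  # fails in our case: the rendered file is a bare server{} block, not a complete nginx configuration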
Go!
Whenever the list of domains in the "re-create domain templates" task is updated or changed, execute:
ansible-playbook proxy-nodes.yml
without any additional options. After adding a new node, run:
ansible-playbook proxy-nodes.yml --limit=bbb.bbb.bbb.bbb
specifying the IP of the new node.
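Before touching live nodes, a dry run can be useful (standard Ansible flags; note that check mode will skip the command-based "validate nginx conf" task):

$ ansible-playbook proxy-nodes.yml --check --diff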
Conclusion
Asking Google, I found no answers about hosting providers offering such a service out of the box. Yet the target audience could be quite diverse, from SEO specialists to various adult webmasters.
Hence the poll below.