
Deploying Ruby on Rails 4 applications with Capistrano 3


Imagine: you are a web developer who has just mastered Ruby on Rails, and your first project has reached the stage where it needs to go live on the Internet.
Of course, you could push it to Heroku, but the prices sting a little. That leaves buying a VPS, configuring it, and putting the project there.
"What could be easier? I'll find some guide and just follow it," you think. The trouble is that guides which not only list commands but also explain what those commands do are rare, and most of them still use the outdated second version of Capistrano.

Therefore, I decided to write my own guide, in which I will try to cover the whole process in detail.


I hope this guide will be useful not only for beginners, but also for experienced developers.


Initial server configuration



You have bought your first VPS, installed the OS (I use Ubuntu 12.04 LTS, and all the commands below are given for it), and logged in via SSH. What next?

First, change the root user's password with the command
passwd 

Create a new user:
 adduser deployer 

Allow it to use the sudo command:
 visudo 
and append:
 deployer ALL=(ALL:ALL) ALL 

Let's change the SSH server settings: disable root login, disable reverse DNS lookups, and allow login only for our new user. Add to the file '/etc/ssh/sshd_config':
PermitRootLogin no
UseDNS no
AllowUsers deployer

Apply the new settings by reloading the SSH server with the command:
 reload ssh 


To avoid entering the password every time you connect via SSH, copy your SSH key from your machine to the server. The easiest way is to run this on the local machine:
 ssh-copy-id deployer@123.123.123.123 
(On a Mac you need to install ssh-copy-id first, which you can do via brew; on Windows I do not know of an automated tool for copying keys, but the Internet offers plenty of options.)
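If ssh-copy-id is not available at all, a rough manual equivalent (assuming your public key lives in ~/.ssh/id_rsa.pub) is to append it to authorized_keys on the server yourself:

cat ~/.ssh/id_rsa.pub | ssh deployer@123.123.123.123 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'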

Also, while we are still root, you can create a swap file if you are short on RAM. It is done like this:
dd if=/dev/zero of=/swapfile bs=1024 count=512k
mkswap /swapfile
swapon /swapfile

Next, add the following line to the file '/etc/fstab':
  /swapfile none swap sw 0 0 

And then we perform:
echo 0 > /proc/sys/vm/swappiness
sudo chown root:root /swapfile
sudo chmod 0600 /swapfile

You can reboot and then check that the swap file is active with the command
 swapon -s 
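Keep in mind that the swappiness value written to /proc above only lasts until the next reboot; if you want to keep it permanently, one option is to also add it to /etc/sysctl.conf, for example:

echo 'vm.swappiness = 0' | sudo tee -a /etc/sysctl.conf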


Install and configure nginx



This time we log in as our new user, running on the local machine:
 ssh deployer@123.123.123.123 
Personally, I use the PageSpeed module, so I compile nginx myself. But first we need to update the package lists, upgrade the system, and install the packages required for a successful build:
sudo apt-get update
sudo apt-get dist-upgrade
sudo apt-get install build-essential zlib1g-dev libpcre3 libpcre3-dev unzip

Now we build:
wget https://github.com/pagespeed/ngx_pagespeed/archive/v1.7.30.1-beta.zip
unzip v1.7.30.1-beta.zip
cd ngx_pagespeed-1.7.30.1-beta
wget https://dl.google.com/dl/page-speed/psol/1.7.30.1.tar.gz
tar -xzvf 1.7.30.1.tar.gz
wget http://nginx.org/download/nginx-1.4.4.tar.gz
tar -xzvf nginx-1.4.4.tar.gz
cd nginx-1.4.4
./configure --add-module=$HOME/ngx_pagespeed-1.7.30.1-beta
make
sudo checkinstall

To manage nginx we will write an upstart script. Create the file '/etc/init/nginx.conf' with the following contents:
/etc/init/nginx.conf
 description "nginx http daemon" author "George Shammas <georgyo@gmail.com>" start on (filesystem and net-device-up IFACE=lo) stop on runlevel [!2345] env DAEMON=/usr/local/nginx/sbin/nginx env PID=/var/run/nginx.pid expect fork respawn respawn limit 10 5 #oom never pre-start script $DAEMON -t if [ $? -ne 0 ] then exit $? fi end script exec $DAEMON 


Now you can manage nginx with
 sudo start/stop/restart/status nginx 
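To make sure the build went fine and the PageSpeed module was actually compiled in, you can look at the configure arguments (the path assumes the default prefix used above):

/usr/local/nginx/sbin/nginx -V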

Our nginx.conf lives at '/usr/local/nginx/conf/nginx.conf', but we will not touch it for now; it will be filled in automatically during the first deployment of the application.

For our web applications, we will create a new user and a new group, add ourselves to this group, and create a folder:
sudo useradd -s /sbin/nologin -r nginx
sudo groupadd web
sudo usermod -a -G web nginx
sudo usermod -a -G web deployer
sudo mkdir /var/www
sudo chgrp -R web /var/www
sudo chmod -R 775 /var/www

For the group change to take effect (so that we can write to the folder), log out and log back in as our user.

Install and configure PostgreSQL



The Ubuntu repositories contain an outdated version, so we add a third-party repository. In the file '/etc/apt/sources.list.d/pgdg.list' add:
 deb http://apt.postgresql.org/pub/repos/apt/ precise-pgdg main 

Then add the repository key and install PostgreSQL:
wget --quiet -O - http://apt.postgresql.org/pub/repos/apt/ACCC4CF8.asc | sudo apt-key add -
sudo apt-get update
sudo apt-get install postgresql-9.3 postgresql-server-dev-9.3

And create a new user:
sudo -u postgres psql
create user deployer with password 'YOUR_PASSWORD';
alter role deployer superuser createrole createdb replication;
\q


To allow access from the local computer, change the parameter listen_addresses = 'localhost' to listen_addresses = '*' in the file '/etc/postgresql/9.3/main/postgresql.conf', and add the following line to '/etc/postgresql/9.3/main/pg_hba.conf' (substitute your own IP address):
host    all    deployer    YOUR.IP.ADDRESS    255.255.255.0    md5
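To check that remote access works, you can try connecting from the local machine (assuming a PostgreSQL client is installed there and 123.123.123.123 is your server, as in the SSH examples above); it should prompt for the password you set:

psql -h 123.123.123.123 -U deployer -d postgres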


Restart PostgreSQL with the command
 sudo service postgresql restart 


Install and configure Redis



If you use the resque gem, you will need Redis. Since the version in the repository is outdated, I compile it from source; it takes only a little time:
sudo apt-get install tcl8.5
wget http://download.redis.io/redis-stable.tar.gz
tar xvzf redis-stable.tar.gz
cd redis-stable
make
make test
sudo cp src/redis-server /usr/local/bin
sudo cp src/redis-cli /usr/local/bin


By default Redis is not protected by a password and is open to everyone, so we set one: in the 'redis.conf' file add the requirepass parameter with our password. Redis is easy to brute-force, so I use a password of at least 100 characters. Also, to avoid errors later, change the dir parameter to /var/www/other, having first created that folder ( mkdir /var/www/other ).
Copy the config with the command
sudo mkdir -p /etc/redis    # the directory does not exist yet when Redis is built from source
sudo cp redis.conf /etc/redis/redis.conf

Create an upstart script at '/etc/init/redis-server.conf' with the following content:
/etc/init/redis-server.conf
#!upstart
description "Redis Server"

env USER=deployer

start on runlevel [2345]
stop on runlevel [016]

respawn

exec start-stop-daemon --start --make-pidfile --pidfile /var/run/redis-server.pid --chuid $USER --exec /usr/local/bin/redis-server /etc/redis/redis.conf >> /var/www/log/redis.log 2>&1



Having created a folder for the logs ( mkdir /var/www/log ), we can now manage Redis with
 sudo start/stop/restart/status redis-server
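A quick way to verify that Redis is up and the password works (YOUR_REDIS_PASSWORD stands for the requirepass value you chose):

redis-cli -a YOUR_REDIS_PASSWORD ping
# should answer: PONG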

Installing RVM, Ruby, Rails, Bundler



There is nothing complicated at all:
sudo apt-get install git curl python-software-properties
sudo add-apt-repository ppa:chris-lea/node.js
sudo apt-get update
sudo apt-get install nodejs
curl -L get.rvm.io | bash -s stable
source ~/.rvm/scripts/rvm
rvm requirements
rvm install 2.0.0
rvm use 2.0.0 --default
gem install rails --no-ri --no-rdoc
gem install bundler
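Once this finishes, it does not hurt to check which versions ended up installed:

ruby -v
rails -v
bundle -v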


Create a repository on GitHub / BitBucket




We will use git on a remote server to deploy our application. You could also set up a git server on our VPS, but why bother when there are convenient free services. So we create a repository on GitHub/BitBucket (on BitBucket, private repositories are free), but we are in no hurry to push the project there: first edit the .gitignore file (it is in the root of the application) so that no confidential information ends up in the repo (this is especially important if the repo is public), and so that extra files stay out of it as well:
/config/database.yml    # database credentials
/Procfile               # the local and production Procfiles differ
/config/deploy/         # Capistrano settings
/shared/                # files uploaded to the server only once, during the initial setup
/public/system/         # Paperclip uploads


Now you can make the first commit and push the project to git:
git init
git remote add origin REPOSITORY_URL    # your repository address
git add -A
git commit -m 'first commit'
git push -u origin --all


We also need to add our server's SSH key in the GitHub/BitBucket settings; this is a prerequisite, since changes will be pulled onto the server from the repository. How to do this is described in the service's help.
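In short, the idea is to generate a key pair for the deployer user on the server and paste the public part into the repository settings; roughly like this (the test command below assumes GitHub — for BitBucket use git@bitbucket.org):

ssh-keygen -t rsa        # on the server, accept the default location
cat ~/.ssh/id_rsa.pub    # copy this into the GitHub/BitBucket SSH keys settings
ssh -T git@github.com    # verify that the server can authenticate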

gem foreman


Dr. Foreman from Dr. House

foreman is a gem for managing application processes. On the local machine it allows you to start all the processes listed in the Procfile at once with a single command
 foreman start 
and shows their output.
On the server, the command
 foreman export upstart 
creates upstart scripts so that the application can be conveniently managed with the start/stop/restart commands. But more about that later. For now, just install the gem, create a Procfile in the root of the application, and fill it in for local use. Mine looks like this:
web: rails s
job1: bundle exec rake resque:work PIDFILE=./tmp/pids/resque1.pid QUEUES=send_email
job2: bundle exec rake resque:work PIDFILE=./tmp/pids/resque2.pid QUEUES=send_email

We will write the production configuration later when it comes to Capistrano.

Install Unicorn



Unicorn is an advanced HTTP server. Install it by adding
group :production do
  gem 'unicorn'
end
to the Gemfile (do not forget to run bundle install ).

In the '/config/' folder, create a file unicorn.rb with roughly the following content:
unicorn.rb
worker_processes 2
working_directory "/var/www/apps/APPLICATION_NAME/current" # available in 0.94.0+

# listen on both a Unix domain socket and a TCP port,
# we use a shorter backlog for quicker failover when busy
listen "/var/www/apps/APPLICATION_NAME/socket/.unicorn.sock", :backlog => 64
listen 8080, :tcp_nopush => true

# nuke workers after 30 seconds instead of 60 seconds (the default)
timeout 30

# feel free to point this anywhere accessible on the filesystem
pid "/var/www/apps/APPLICATION_NAME/run/unicorn.pid"

# By default, the Unicorn logger will write to stderr.
# Additionally, some applications/frameworks log to stderr or stdout,
# so prevent them from going to /dev/null when daemonized here:
stderr_path "/var/www/apps/APPLICATION_NAME/log/unicorn.stderr.log"
stdout_path "/var/www/apps/APPLICATION_NAME/log/unicorn.stdout.log"

# combine Ruby 2.0.0dev or REE with "preload_app true" for memory savings
# http://rubyenterpriseedition.com/faq.html#adapt_apps_for_cow
preload_app true
GC.respond_to?(:copy_on_write_friendly=) and
  GC.copy_on_write_friendly = true

# Enable this flag to have unicorn test client connections by writing the
# beginning of the HTTP headers before calling the application.  This
# prevents calling the application for connections that have disconnected
# while queued.  This is only guaranteed to detect clients on the same
# host unicorn runs on, and unlikely to detect disconnects even on a
# fast LAN.
check_client_connection false

before_fork do |server, worker|
  # the following is highly recomended for Rails + "preload_app true"
  # as there's no need for the master process to hold a connection
  defined?(ActiveRecord::Base) and
    ActiveRecord::Base.connection.disconnect!

  # The following is only recommended for memory/DB-constrained
  # installations.  It is not needed if your system can house
  # twice as many worker_processes as you have configured.
  #
  # # This allows a new master process to incrementally
  # # phase out the old master process with SIGTTOU to avoid a
  # # thundering herd (especially in the "preload_app false" case)
  # # when doing a transparent upgrade.  The last worker spawned
  # # will then kill off the old master process with a SIGQUIT.
  old_pid = "#{server.config[:pid]}.oldbin"
  if old_pid != server.pid
    begin
      sig = (worker.nr + 1) >= server.worker_processes ? :QUIT : :TTOU
      Process.kill(sig, File.read(old_pid).to_i)
    rescue Errno::ENOENT, Errno::ESRCH
    end
  end

  #
  # Throttle the master from forking too quickly by sleeping.  Due
  # to the implementation of standard Unix signal handlers, this
  # helps (but does not completely) prevent identical, repeated signals
  # from being lost when the receiving process is busy.
  # sleep 1
end

after_fork do |server, worker|
  # per-process listener ports for debugging/admin/migrations
  # addr = "127.0.0.1:#{9293 + worker.nr}"
  # server.listen(addr, :tries => -1, :delay => 5, :tcp_nopush => true)

  # the following is *required* for Rails + "preload_app true",
  defined?(ActiveRecord::Base) and
    ActiveRecord::Base.establish_connection

  # if preload_app is true, then you may also want to check and
  # restart any other shared sockets/descriptors such as Memcached,
  # and Redis.  TokyoCabinet file handles are safe to reuse
  # between any number of forked children (assuming your kernel
  # correctly implements pread()/pwrite() system calls)
end


Replace APPLICATION_NAME with your application's name, which you will later set in the Capistrano settings.

Capistrano



Capistrano is a very convenient tool for deploying an application, even if it does not seem so at first. Install it together with the necessary plugins by adding to the Gemfile:
group :development do
  gem 'capistrano'
  gem 'capistrano-rails'
  gem 'capistrano-bundler'
  gem 'capistrano-rvm'
end

Run bundle exec cap install and add the following to the Capfile:
require 'capistrano/deploy'
require 'capistrano/rvm'
require 'capistrano/bundler'
require 'capistrano/rails'
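To check that all the modules were picked up, you can list the tasks Capistrano now knows about:

bundle exec cap -T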

Even at this point, just by specifying the server address, the repository and the working folder, Capistrano can already perform a basic deployment. But this is not enough for us, so we will add the missing pieces below.

The files that are needed only for the first deployment will be stored in the shared folder (in the project folder on the local machine); this is exactly why we added it to .gitignore. First we create an nginx.conf there with roughly the following contents:
nginx.conf
user nginx web;
pid /var/run/nginx.pid;
error_log /var/www/log/nginx.error.log;

events {
  worker_connections 1024; # increase if you have lots of clients
  accept_mutex off; # "on" if nginx worker_processes > 1
  use epoll; # enable for Linux 2.6+
  # use kqueue; # enable for FreeBSD, OSX
}

http {
  # nginx will find this file in the config directory set at nginx build time
  include mime.types;

  types_hash_max_size 2048;
  server_names_hash_bucket_size 64;

  # fallback in case we can't determine a type
  default_type application/octet-stream;

  # click tracking!
  access_log /var/www/log/nginx.access.log combined;

  # you generally want to serve static files with nginx since neither
  # Unicorn nor Rainbows! is optimized for it at the moment
  sendfile on;

  tcp_nopush on; # off may be better for *some* Comet/long-poll stuff
  tcp_nodelay off; # on may be better for some Comet/long-poll stuff

  # we haven't checked to see if Rack::Deflate on the app server is
  # faster or not than doing compression via nginx.  It's easier
  # to configure it all in one place here for static files and also
  # to disable gzip for clients who don't get gzip/deflate right.
  # There are other gzip settings that may be needed used to deal with
  # bad clients out there, see http://wiki.nginx.org/NginxHttpGzipModule
  gzip on;
  gzip_http_version 1.0;
  gzip_proxied any;
  gzip_min_length 0;
  gzip_vary on;
  gzip_disable "MSIE [1-6]\.";
  gzip_proxied expired no-cache no-store private auth;
  gzip_comp_level 9;
  gzip_types text/plain text/xml text/css text/comma-separated-values text/javascript application/x-javascript application/atom+xml;

  # this can be any application server, not just Unicorn/Rainbows!
  upstream app_server {
    server unix:/var/www/apps/APPLICATION_NAME/socket/.unicorn.sock fail_timeout=0;
  }

  server {
    # PageSpeed
    pagespeed on;
    pagespeed FileCachePath /var/ngx_pagespeed_cache;
    location ~ "\.pagespeed\.([a-z]\.)?[a-z]{2}\.[^.]{10}\.[^.]+" { add_header "" ""; }
    location ~ "^/ngx_pagespeed_static/" { }
    location ~ "^/ngx_pagespeed_beacon$" { }
    location /ngx_pagespeed_statistics { allow 127.0.0.1; allow 5.228.169.73; deny all; }
    location /ngx_pagespeed_global_statistics { allow 127.0.0.1; allow 5.228.169.73; deny all; }
    pagespeed MessageBufferSize 100000;
    location /ngx_pagespeed_message { allow 127.0.0.1; allow 5.228.169.73; deny all; }
    location /pagespeed_console { allow 127.0.0.1; allow 5.228.169.73; deny all; }

    charset utf-8;

    # enable one of the following if you're on Linux or FreeBSD
    listen 80 default deferred; # for Linux
    # listen 80 default accept_filter=httpready; # for FreeBSD

    # If you have IPv6, you'll likely want to have two separate listeners.
    # One on IPv4 only (the default), and another on IPv6 only instead
    # of a single dual-stack listener.  A dual-stack listener will make
    # for ugly IPv4 addresses in $remote_addr (eg ":ffff:10.0.0.1"
    # instead of just "10.0.0.1") and potentially trigger bugs in
    # some software.
    # listen [::]:80 ipv6only=on; # deferred or accept_filter recommended

    client_max_body_size 4G;
    server_name _;

    # ~2 seconds is often enough for most folks to parse HTML/CSS and
    # retrieve needed images/icons/frames, connections are cheap in
    # nginx so increasing this is generally safe...
    keepalive_timeout 5;

    # path for static files
    root /var/www/apps/APPLICATION_NAME/current/public;

    # Prefer to serve static files directly from nginx to avoid unnecessary
    # data copies from the application server.
    #
    # try_files directive appeared in in nginx 0.7.27 and has stabilized
    # over time.  Older versions of nginx (eg 0.6.x) requires
    # "if (!-f $request_filename)" which was less efficient:
    # http://bogomips.org/unicorn.git/tree/examples/nginx.conf?id=v3.3.1#n127
    try_files $uri/index.html $uri.html $uri @app;

    location ~ ^/(assets)/ {
      root /var/www/apps/APPLICATION_NAME/current/public;
      expires max;
      add_header Cache-Control public;
    }

    location @app {
      # an HTTP header important enough to have its own Wikipedia entry:
      #   http://en.wikipedia.org/wiki/X-Forwarded-For
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

      # enable this if you forward HTTPS traffic to unicorn,
      # this helps Rack set the proper URL scheme for doing redirects:
      # proxy_set_header X-Forwarded-Proto $scheme;

      # pass the Host: header from the client right along so redirects
      # can be set properly within the Rack application
      proxy_set_header Host $http_host;

      # we don't want nginx trying to do something clever with
      # redirects, we set the Host: header above already.
      proxy_redirect off;

      # set "proxy_buffering off" *only* for Rainbows! when doing
      # Comet/long-poll/streaming.  It's also safe to set if you're using
      # only serving fast clients with Unicorn + nginx, but not slow
      # clients.  You normally want nginx to buffer responses to slow
      # clients, even with Rails 3.1 streaming because otherwise a slow
      # client can become a bottleneck of Unicorn.
      #
      # The Rack application may also set "X-Accel-Buffering (yes|no)"
      # in the response headers do disable/enable buffering on a
      # per-response basis.
      # proxy_buffering off;

      proxy_pass http://app_server;
    }

    # Rails error pages
    error_page 500 502 503 504 /500.html;
    location = /500.html {
      root /var/www/apps/APPLICATION_NAME/current/public;
    }
  }
}


This is my nginx config; in it you need to replace 'APPLICATION_NAME' with your application name, the one set in the first lines of config/deploy.rb (set :application, 'APPLICATION_NAME').

Now, in the same place (in /shared/), create a Procfile with the following content:
web: bundle exec unicorn_rails -c /var/www/apps/APPLICATION_NAME/current/config/unicorn.rb -E production
job1: bundle exec rake resque:work RAILS_ENV=production PIDFILE=/var/www/apps/APPLICATION_NAME/run/resque1.pid QUEUES=*
job2: bundle exec rake resque:work RAILS_ENV=production PIDFILE=/var/www/apps/APPLICATION_NAME/run/resque2.pid QUEUES=*

This is a config for an application with two resque workers. If you are not using Resque, keep only the first line.
In the same place, create database.yml with the database settings, and application.yml if you use the Figaro gem.
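As a rough sketch (the layout follows the standard Rails PostgreSQL configuration; APPLICATION_NAME and YOUR_DB_PASSWORD are placeholders), shared/database.yml on the local machine could be created like this:

cat > shared/database.yml <<EOF
production:
  adapter: postgresql
  encoding: unicode
  database: APPLICATION_NAME_production
  pool: 5
  username: deployer
  password: YOUR_DB_PASSWORD
  host: localhost
EOF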

Your Capistrano script will execute some commands on the server as the superuser. To allow it to do this, run the command
 sudo visudo 
on the server and add the line:
 deployer ALL=NOPASSWD: /usr/sbin/service, /bin/ln, /bin/rm, /bin/mv, /sbin/start, /sbin/stop, /sbin/restart, /sbin/status 


It remains only to configure Capistrano. In the file 'config/deploy/production.rb' we set:
server 'SERVER_IP', user: 'deployer', roles: %w{web app db}
In the file 'config/deploy.rb' add at the top:
deploy.rb
set :repo_url, 'REPOSITORY_URL'
set :application, 'APPLICATION_NAME'
application = 'APPLICATION_NAME'
set :rvm_type, :user
set :rvm_ruby_version, '2.0.0-p353'
set :deploy_to, '/var/www/apps/APPLICATION_NAME'

namespace :foreman do
  desc 'Start server'
  task :start do
    on roles(:all) do
      sudo "start #{application}"
    end
  end

  desc 'Stop server'
  task :stop do
    on roles(:all) do
      sudo "stop #{application}"
    end
  end

  desc 'Restart server'
  task :restart do
    on roles(:all) do
      sudo "restart #{application}"
    end
  end

  desc 'Server status'
  task :status do
    on roles(:all) do
      execute "initctl list | grep #{application}"
    end
  end
end

namespace :git do
  desc 'Deploy'
  task :deploy do
    ask(:message, "Commit message?")
    run_locally do
      execute "git add -A"
      execute "git commit -m '#{fetch(:message)}'"
      execute "git push"
    end
  end
end


What does all of this mean? The first lines are configuration. Then we describe the tasks. There is a foreman namespace with four tasks: start, stop, restart and status. When you run 'cap production foreman:start' on the local machine, 'sudo start APPLICATION_NAME' is executed on the server; for now this gives us nothing, because foreman has not yet created the upstart scripts. Next, there is a git namespace with a deploy task. When 'cap production git:deploy' is executed, the user is asked for a commit message and the following commands run locally:
git add -A
git commit -m '<commit message>'
git push

Not difficult at all, right? But we will not run these commands by hand; they will be invoked by other tasks. Now, inside the 'namespace :deploy do' block, add:
deploy.rb
desc 'Setup'
task :setup do
  on roles(:all) do
    execute "mkdir #{shared_path}/config/"
    execute "mkdir /var/www/apps/#{application}/run/"
    execute "mkdir /var/www/apps/#{application}/log/"
    execute "mkdir /var/www/apps/#{application}/socket/"
    execute "mkdir #{shared_path}/system"
    sudo "ln -s /var/log/upstart /var/www/log/upstart"
    upload!('shared/database.yml', "#{shared_path}/config/database.yml")
    upload!('shared/Procfile', "#{shared_path}/Procfile")
    upload!('shared/nginx.conf', "#{shared_path}/nginx.conf")
    sudo 'stop nginx'
    sudo "rm -f /usr/local/nginx/conf/nginx.conf"
    sudo "ln -s #{shared_path}/nginx.conf /usr/local/nginx/conf/nginx.conf"
    sudo 'start nginx'
    within release_path do
      with rails_env: fetch(:rails_env) do
        execute :rake, "db:create"
      end
    end
  end
end

desc 'Create symlink'
task :symlink do
  on roles(:all) do
    execute "ln -s #{shared_path}/config/database.yml #{release_path}/config/database.yml"
    execute "ln -s #{shared_path}/Procfile #{release_path}/Procfile"
    execute "ln -s #{shared_path}/system #{release_path}/public/system"
  end
end

desc 'Foreman init'
task :foreman_init do
  on roles(:all) do
    foreman_temp = "/var/www/tmp/foreman"
    execute "mkdir -p #{foreman_temp}"
    # the 'current' symlink does not exist yet, so create it for foreman to export the upstart scripts
    execute "ln -s #{release_path} #{current_path}"
    within current_path do
      execute "cd #{current_path}"
      execute :bundle, "exec foreman export upstart #{foreman_temp} -a #{application} -u deployer -l /var/www/apps/#{application}/log -d #{current_path}"
    end
    sudo "mv #{foreman_temp}/* /etc/init/"
    sudo "rm -r #{foreman_temp}"
  end
end

desc 'Restart application'
task :restart do
  on roles(:app), in: :sequence, wait: 5 do
    sudo "restart #{application}"
  end
end

after :finishing, 'deploy:cleanup'
after :finishing, 'deploy:restart'
after :updating, 'deploy:symlink'
after :setup, 'deploy:foreman_init'
after :foreman_init, 'foreman:start'
before :foreman_init, 'rvm:hook'
before :setup, 'deploy:starting'
before :setup, 'deploy:updating'
before :setup, 'bundler:install'



So the deploy namespace gets four new tasks: setup (the initial setup), foreman_init (creating the upstart scripts for the application), symlink (creating symbolic links) and restart (restarting the application). We also specify before and after which stages each of them should run.

deploy:setup performs the initial server configuration: it uploads the files from the local shared folder to the shared folder on the server, configures nginx, creates the necessary folders and triggers deploy:foreman_init, which in turn generates the upstart scripts via foreman and copies them to /etc/init, after which we can control our application with sudo start/stop/restart/status APPLICATION_NAME. Before deploy:setup, the first three steps of a normal deployment are performed: the code is uploaded to the server and bundle install is run. After each deployment, new symlinks are created and Unicorn is restarted. It only remains to add before :deploy, 'git:deploy' at the end of this file, and new changes will be committed automatically before each deployment.
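If everything is wired up as described, the whole routine from the local machine boils down to roughly these two commands (a sketch based on the tasks and hooks above):

cap production deploy:setup    # the very first deployment: initial setup, foreman export, start
cap production deploy          # every subsequent deployment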

Once again: deploy:setup is run only once, for the very first deployment; every subsequent release is just a regular cap production deploy, as shown above.



That is exactly how I always deploy my applications to a VPS. Of course, this method is not the ultimate truth, but I have tried to use it as an example to explain how Capistrano works, so that even a beginner would have no trouble adapting the script to his needs. I also do not claim that my nginx.conf and unicorn.rb are perfect, but everything has been running for almost a year on a fairly modest VPS without problems, even under load.

Source: https://habr.com/ru/post/213269/

