Greetings, dear readers!
Below is a fascinating (?) story about how our organization solved the so-called "deployment like real people do" problem. Our main language is Python, with an admixture of various interesting (and not-so-interesting) packages (Django, Bottle, Flask, PIL, ZMQ, etc.).
Let's start with a brief description of one of our applications:
- Django 1.4
- MySQL
- Celery for cron imitation and auxiliary background tasks
- A daemon process built as a Django management command (see the sketch below)
The whole thing runs under a Gunicorn + nginx combination, on CentOS 5.8.
Details, as is customary, below.
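For illustration, here is a minimal sketch of what such a management-command daemon might look like (the command name run_command matches the zdaemon config shown later in the post; the body is a stand-in, not our actual code):
# app/management/commands/run_command.py -- illustrative stand-in
import time

from django.core.management.base import BaseCommand

class Command(BaseCommand):
    help = "Long-running background daemon"

    def handle(self, *args, **options):
        # The real loop does the actual background work; daemonization
        # and restarts are handled by zdaemon (see below).
        while True:
            self.do_work()
            time.sleep(1)

    def do_work(self):
        pass  # the real job goes here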
The essence of the problem
In one of the final phases of the project, it dawned on us that "svn up && python manage.py syncdb && python manage.py migrate" is, frankly, a kludge; the search for a "more optimal" approach began.
Option One - "Snake in Space"
Since we use Spacewalk for server management, the first idea was to pack our application into an RPM package; the prospect of "one-click" installation was alluring, and the option was taken into development.
About 8 hours later, when the WTF/hr indicator was stuck firmly in the red zone, we decided to look for something simpler. The main reasons:
- All dependencies must be packaged as RPMs too.
- Not all packages (or distributions) take kindly to this.
- Packaging effectively forks the package, since you often need to edit setup.py for a proper conversion to a .spec file.
- The packaging toolchain itself demands considerable experience.
Option Two - "Snake in the Shop"
The second option arose from the question "hold on, how do we install other people's packages?", and the search for a local copy of PyPI began. Among the many abandoned and hopeless candidates, localshop was chosen: it pleased us with its simple installation and didn't even demand strange packages pinned to this-or-that version.
Fitting our application in took relatively little time: all we needed was to add a setup.py and list the third-party packages right there, although we still stepped on a rake or two:
We had to explicitly set zip_safe=False and include_package_data=True, since otherwise some files were not unpacked during installation.
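For reference, a minimal sketch of what such a setup.py boils down to (the package name, version, and dependency list here are illustrative, not our real ones):
# setup.py -- illustrative sketch; names and versions are made up
from setuptools import setup, find_packages

setup(
    name='our_app',
    version='1.0.2',
    packages=find_packages(),
    install_requires=[
        'Django==1.4',
        'celery',
    ],
    # The two "rakes" from above: without these flags some data files
    # were not unpacked during installation.
    zip_safe=False,
    include_package_data=True,
)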
In Apache (which we will be replacing soon) we had to set KeepAliveTimeout 300, SetEnv proxy-sendcl, and ProxyTimeout 1800 so that uploading large packages would work.
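For completeness, the relevant fragment of the Apache config looked roughly like this (the backend address and port are assumptions, not our actual values):
# Fragment of the Apache vhost in front of localshop
KeepAliveTimeout 300
ProxyTimeout 1800
# Make mod_proxy spool the request body and send Content-Length,
# which large uploads needed
SetEnv proxy-sendcl 1
ProxyPass /simple/ http://127.0.0.1:8900/simple/
ProxyPassReverse /simple/ http://127.0.0.1:8900/simple/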
Beyond that, configuring localshop itself went smoothly; it was enough to run (under a "clean" account):
cd && virtualenv venv && . venv/bin/activate && pip install localshop
~/venv/bin/localshop init && ~/venv/bin/localshop upgrade
After that, all that remained was to adjust ~/.pypirc for our "shop":
[distutils]
index-servers = local
[local]
username: developer
password: parolcheg123
repository: http://cheese.example.com/simple/
After that, the release process boils down to python setup.py sdist upload -r local after bumping the version number in setup.py.
Final Approach
Our first attempt to install the application on the production ("combat") server met a sad fate: we needed the PIL package, while GCC and various bits like libpng-devel were absent as a class. So we still had to hand-build RPM packages of Python and various interesting pieces (MySQL-python, setuptools, PIL, ZMQ) and upload them to Spacewalk.
After this educational exercise (which, honestly, deserves a post of its own), the application itself installed fine, and we set about polishing the process, ironing out minor problems:
- Auto-starting Gunicorn (for localshop in general and our applications in particular): a little filing of an init script dug up on the web was enough.
- "Correct" pip configuration on the production servers: add index-url = http://cheese.example.com/simple/ to the [global] section of a (new) ~produser/.pip/pip.conf file; see the sketch right after this list.
- Hooking all of this into the monitoring system (Opsview): adding a new service check for processes with "run_gunicorn" or "gunicorn" in their arguments and linking it to the server.
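The resulting pip.conf is tiny; here it is in full (hostname taken from the example above):
# ~produser/.pip/pip.conf
[global]
index-url = http://cheese.example.com/simple/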
In addition, we needed to run a long-lived ("long-playing") process, as well as Celery. For our application we decided to use zdaemon, for which an init script and a configuration file were written:
#!/bin/bash
### BEGIN INIT INFO
# Provides: our_app
# Required-Start: $all
# Required-Stop: $all
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: controls our_app via zdaemon
# Description: controls our_app via zdaemon
### END INIT INFO
. /etc/rc.d/init.d/functions
. /etc/sysconfig/network
. ~produser/venv/bin/activate

# Check that networking is up.
[ "$NETWORKING" = "no" ] && exit 0

RETVAL=0
APP_PATH=~produser/app/
PYTHON=~produser/venv/bin/python
USER=produser

start() {
    cd $APP_PATH
    zdaemon -C our_app.zdconf start
    zdaemon -C our_app_celery.zdconf start
}

stop() {
    cd $APP_PATH
    zdaemon -C our_app.zdconf stop
    zdaemon -C our_app_celery.zdconf stop
}

check_status() {
    cd $APP_PATH
    zdaemon -C our_app.zdconf status
    zdaemon -C our_app_celery.zdconf status
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    status)
        check_status
        ;;
    restart)
        stop
        start
        ;;
    *)
        echo "Usage: $0 {start|stop|status|restart}"
        ;;
esac
exit 0
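To wire the script into the CentOS runlevels, something along these lines should suffice (the file name is an assumption):
# install and register the init script (CentOS 5.x)
cp our_app.init /etc/init.d/our_app
chmod +x /etc/init.d/our_app
chkconfig --add our_app
chkconfig our_app on
service our_app start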
# our_app[_celery].zdconf
<runner>
daemon true
directory /opt/produser/app/
forever false
backoff-limit 10
user produser
# run_command -> actual command, or celeryd for the Celery instance
program /opt/produser/venv/bin/python /opt/produser/app/manage.py run_command --settings=prod_settings
socket-name /tmp/our_app.zdsock
</runner>
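The Celery twin, our_app_celery.zdconf, differs only in the program line and the socket name; roughly like this (the celeryd invocation via manage.py assumes django-celery):
# our_app_celery.zdconf -- sketch; differs from the above only as noted
<runner>
daemon true
directory /opt/produser/app/
forever false
backoff-limit 10
user produser
# celeryd here instead of run_command
program /opt/produser/venv/bin/python /opt/produser/app/manage.py celeryd --settings=prod_settings
socket-name /tmp/our_app_celery.zdsock
</runner>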
Final result
Running . ~/venv/bin/activate && pip install -U our_app under the produser account installs the latest version of our application almost without a hitch, plus all the third-party packages listed in setup.py.
The syncdb and migrate steps are still done by hand, but:
- The version running in production is always known.
- No need to install GCC and friends on the production server.
- Rollback is simple.
I hope this description of the process has enlightened at least some of our readers.