
Setting up graphite + virtualenv + collectd

In this article I would like to share my experience configuring the collectd statistics-collection service together with graphite: the former serves as the data collector, the latter as the storage and visualizer.

Motivation


Until recently I used Munin to collect and display statistics, but its graphs always annoyed me (the appearance), I don't know why. Besides, something in it kept breaking after every update, and that wore me out. So I decided to look for an alternative and came across collectd. Overall it seemed a worthy replacement, but every web visualizer I found for it looked rather miserable, and I was about ready to abandon the idea. Then I remembered that we had recently set up graphite at work, and decided to see what would come of combining the two.

Purpose


Set up graphite so that it runs under supervisord, uwsgi, and virtualenv and is reachable from outside (via nginx). Collectd, in turn, must feed its data to graphite directly.

Graphite

First, create the directory where graphite will live and deploy a virtual environment there.
$ mkdir /var/projects/graphite
$ cd /var/projects/graphite
$ virtualenv --no-site-packages .env
$ virtualenv --relocatable .env
$ source .env/bin/activate

Graphite uses cairo to render images, so that library must be installed on the system. In my case it was already there, so I will only describe installing pycairo into the virtual environment.
Download and install py2cairo

$ wget http://cairographics.org/releases/py2cairo-1.10.0.tar.bz2
$ tar -jxf py2cairo-1.10.0.tar.bz2 && cd py2cairo-1.10.0
$ ./waf configure --prefix=$VIRTUAL_ENV
$ ./waf build
$ ./waf install
$ cd .. && rm -R py2cairo-1.10.0 && rm py2cairo-1.10.0.tar.bz2

Installing whisper

$ wget https://launchpad.net/graphite/0.9/0.9.9/+download/whisper-0.9.9.tar.gz
$ tar -xzpf whisper-0.9.9.tar.gz && cd whisper-0.9.9
$ python setup.py install
$ cd .. && rm -R whisper-0.9.9 && rm whisper-0.9.9.tar.gz

Installing carbon

$ wget https://launchpad.net/graphite/0.9/0.9.9/+download/carbon-0.9.9.tar.gz
$ tar -xzpf carbon-0.9.9.tar.gz && cd carbon-0.9.9

So that everything installs into our sandbox, in the setup.cfg file change:
prefix = /opt/graphite
to:
prefix = $VIRTUAL_ENV/..
Then:
$ python setup.py install
$ cd .. && rm -R carbon-0.9.9 && rm carbon-0.9.9.tar.gz
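If you prefer to script that edit, a sed one-liner does the job (a sketch, demonstrated here on a throwaway setup.cfg in a temp directory; in practice you would run just the sed line inside the unpacked carbon, and later graphite-web, source directory):

```shell
cd "$(mktemp -d)"

# Throwaway setup.cfg standing in for the one shipped with carbon-0.9.9.
cat > setup.cfg <<'EOF'
[install]
prefix = /opt/graphite
EOF

# Point the install prefix at the sandbox instead of /opt/graphite.
# Single quotes keep $VIRTUAL_ENV literal in the file, as in the article.
sed -i 's|^prefix = /opt/graphite|prefix = $VIRTUAL_ENV/..|' setup.cfg
grep '^prefix' setup.cfg
# → prefix = $VIRTUAL_ENV/..
```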

Installing graphite (webapp)

$ wget https://launchpad.net/graphite/0.9/0.9.9/+download/graphite-web-0.9.9.tar.gz
$ tar -xzpf graphite-web-0.9.9.tar.gz && cd graphite-web-0.9.9

Using check-dependencies.py, we check what else needs to be installed; in my case it was:
 $ pip install django django-tagging twisted python-memcached psycopg2 egenix-mx-base 

The last one is, in principle, optional, but since I already had it, I figured it could stay.
As with carbon, in the setup.cfg file change:
prefix = /opt/graphite
to:
prefix = $VIRTUAL_ENV/..
Then:
$ python setup.py install
$ cd .. && rm -R graphite-web-0.9.9 && rm graphite-web-0.9.9.tar.gz

Configuring carbon and graphite

The configuration file for supervisord will look something like this:
[program:graphite_uwsgi]
command=/usr/bin/uwsgi --pidfile /var/projects/graphite/run/graphite_uwsgi.pid -x /var/projects/graphite/conf/uwsgi.conf --vacuum
directory=/var/projects/graphite/webapp/
autostart=true
autorestart=true
startsecs=5
startretries=3
stopwaitsecs=15
stopretries=1
stopsignal=QUIT
redirect_stderr=false
stdout_logfile=/var/projects/graphite/storage/log/graphite_uwsgi.log
stdout_logfile_maxbytes=1MB
stdout_logfile_backups=10
stdout_capture_maxbytes=1MB
stderr_logfile=/var/projects/graphite/storage/log/graphite_uwsgi-error.log
stderr_logfile_maxbytes=1MB
stderr_logfile_backups=10
stderr_capture_maxbytes=1MB

[program:carbon]
command=/var/projects/graphite/.env/bin/python /var/projects/graphite/bin/carbon-cache.py --debug start
priority=1
autostart=true
autorestart=true
startsecs=3
redirect_stderr=false
stdout_logfile=/var/projects/graphite/storage/log/carbon.log
stdout_logfile_maxbytes=1MB
stdout_logfile_backups=10
stdout_capture_maxbytes=1MB
stderr_logfile=/var/projects/graphite/storage/log/carbon-error.log
stderr_logfile_maxbytes=1MB
stderr_logfile_backups=10
stderr_capture_maxbytes=1MB

It specifies how to run the graphite backend and the carbon server. In my case I put this file in the sandbox's conf directory and symlinked it into the directory where supervisord looks for configs.
Next, I create a config for uwsgi, which looks like this:
<uwsgi>
    <socket>127.0.0.1:8001</socket>
    <processes>2</processes>
    <home>/var/projects/graphite/.env</home>
    <pythonpath>/var/projects/graphite/webapp/</pythonpath>
    <chdir>/var/projects/graphite/webapp</chdir>
    <max-requests>2000</max-requests>
    <touch-reload>/var/projects/graphite/uwsgi.reload</touch-reload>
    <harakiri>120</harakiri>
    <post-buffering>8192</post-buffering>
    <post-buffering-bufsize>65536</post-buffering-bufsize>
    <master/>
    <single-interpreter/>
    <env>DJANGO_SETTINGS_MODULE=graphite.settings</env>
    <module>wsgi</module>
</uwsgi>

The module directive points to the Python module to run. As a base I took graphite.wsgi.example, which ships with the webapp, fixed it up, and saved it to the /var/projects/graphite/webapp directory under the name wsgi.py. After the edits it looks like this:
import os

import django.core.handlers.wsgi

application = django.core.handlers.wsgi.WSGIHandler()

# READ THIS
# Initializing the search index can be very expensive, please include
# the WSGIScriptImport directive pointing to this script in your vhost
# config to ensure the index is preloaded before any requests are handed
# to the process.
from graphite.logger import log
log.info("graphite.wsgi - pid %d - reloading search index" % os.getpid())
import graphite.metrics.search

Next, in the conf directory, rename the file pair:
$ mv carbon.conf.example carbon.conf
$ mv storage-schemas.conf.example storage-schemas.conf
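While you are at it, it is worth looking into storage-schemas.conf: its retention settings decide how long, and at what resolution, whisper keeps each metric. A hypothetical entry for collectd data might look like this (a sketch: the section name, pattern, and values here are my own, and 0.9.9 expects seconds-per-point:number-of-points pairs; the pattern assumes your metrics arrive under a `collectd.` prefix, so adjust it to the names your writer actually produces):

```ini
[collectd]
pattern = ^collectd\.
# 10-second resolution for 1 day (8640 points),
# then 60-second resolution for 30 days (43200 points)
retentions = 10:8640,60:43200
```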

In addition, you need to rename the settings file in the webapp/graphite directory:
 $ mv local_settings.py.example local_settings.py 

It is also recommended to make any necessary changes in this file; for example, I put my PostgreSQL connection settings there.
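For reference, graphite-web 0.9.x reads the old-style Django DATABASE_* settings (later releases switched to the DATABASES dict); a hypothetical PostgreSQL block in local_settings.py looks roughly like this (database name, user, and password are placeholders):

```python
# Hypothetical PostgreSQL settings for local_settings.py
# (graphite-web 0.9.x, old-style Django settings).
# The name, user, and password below are placeholders.
DATABASE_ENGINE = 'postgresql_psycopg2'
DATABASE_NAME = 'graphite'
DATABASE_USER = 'graphite'
DATABASE_PASSWORD = 'secret'
DATABASE_HOST = '127.0.0.1'
DATABASE_PORT = '5432'
```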
After editing local_settings, initialize the graphite database:
 $ python webapp/graphite/manage.py syncdb 

Nginx for graphite

I made a file like this:
server {
    access_log /var/projects/graphite/storage/log/nginx.access.log main;
    error_log /var/projects/graphite/storage/log/nginx.error.log info;

    listen 80;
    server_name graphite.some_site.com;
    root /var/projects/graphite/webapp/;

    location / {
        include /etc/nginx/uwsgi_params;
        uwsgi_pass 127.0.0.1:8001;
    }

    location /content/ {
        access_log off;
        expires 30d;
    }
}

I saved it as nginx.conf in the sandbox's conf directory, next to the supervisord and uwsgi files described above, and made a symlink to it in nginx's virtual hosts directory.

Launch

Before restarting supervisord and nginx, you need to fix the permissions on the sandbox directory so that its files and directories are readable by the users that nginx and supervisord run as. On top of that, those users need write access to the run and storage directories. In my case both supervisord and nginx belong to the www group, so I do the following:
$ mkdir /var/projects/graphite/run
$ chown myuser:www -R /var/projects/graphite
$ cd /var/projects/graphite
$ chmod -R 770 storage run

That's it: restart supervisord and nginx. The logs in storage show that everything is fine, and the browser, at the address we configured, shows graphite.
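As a quick sanity check that carbon accepts data, you can hand-feed it one datapoint over its plaintext protocol. This is a sketch: it assumes carbon-cache is listening on its default plaintext port 2003, and `test.graphite.smoke` is just a made-up metric name.

```shell
# Carbon's plaintext protocol takes one line per datapoint:
# "<metric path> <value> <unix timestamp>"
line="test.graphite.smoke 42 $(date +%s)"
echo "$line"

# Feed it to the default plaintext listener (uncomment once carbon is running):
# echo "$line" | nc -q0 127.0.0.1 2003
```

A minute or so later the metric should be browsable in the graphite web UI.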

Collectd

Everything here is much simpler :-) First install collectd itself, with the set of plugins you need, from your distribution's packages; I won't describe that process, since everyone knows how to install applications on their distribution of choice.
Since the service stores no data itself, I did not install rrd*: graphite is used instead of rrdtool. For the collected data to end up where it should, you need a plugin that ships it off. I settled on collectd-carbon, which is written in Python. At first I tried a plugin written in C (collectd-write_graphite); it even worked right away, but it sent out strange metric names, so I gave up on it.
Configuring the plugin is simple and straightforward and is described on GitHub at the link above.
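For completeness, a sketch of the relevant collectd.conf fragment, written from memory of the collectd-carbon README (the module name carbon_writer and the option names come from the plugin; check the repository for the authoritative list, and adjust the ModulePath and TypesDB paths to your distribution):

```
<LoadPlugin python>
    Globals true
</LoadPlugin>

<Plugin python>
    # Directory holding carbon_writer.py from collectd-carbon
    ModulePath "/usr/lib/collectd/python"
    Import "carbon_writer"
    <Module carbon_writer>
        LineReceiverHost "127.0.0.1"
        LineReceiverPort 2003
        TypesDB "/usr/share/collectd/types.db"
    </Module>
</Plugin>
```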

Summary

That is probably it, since the goal has been achieved. I use this setup to view statistics for my home server, and in the near future I plan to add data from several external machines as well; graphite thus acts as a statistics aggregator.
Thank you for your attention; I hope the post will be useful to someone. Apologies if it came out a bit long (and perhaps not entirely smooth in places).

Links

http://graphite.readthedocs.org/en/1.0/index.html
http://www.frlinux.eu/?p=199
http://graphite.wikidot.com/
http://mldav.blogspot.com/2011/10/debian-graphite.html
http://collectd.org/

PS

About all the steps involved: this is a jab at the graphite developers, who for some reason did not bother to ship, say, one package instead of three, plus a small script to set up the environment.

Source: https://habr.com/ru/post/139386/

