
Dedok recommends: comparing various ways of deploying Django applications

More and more of our clients use the wonderful Django web framework in their projects, and this is not surprising. The framework lets you build dynamic sites very quickly while remaining highly flexible. It has in its arsenal a lot of ready-made solutions for almost every occasion and is, in effect, a site construction kit. Its main advantage is flexibility, thanks to which you can create a web application of absolutely any complexity in a short time.

The framework ships with a convenient built-in web server that makes debugging an application easy, but it is naturally not suitable for production use.

Recently we have received many requests from clients about deploying Django to a production server. So we decided to run a small test of several of the most popular bundles for performance and ease of use, in order to recommend the best option to our customers.
For the tests we took a typical dedicated server.

Some server information:
- OS: Debian GNU/Linux 5.0, kernel: Linux alex.tests #1 SMP Wed Apr 7 10:36:28 MSD 2010 i686 GNU/Linux
- CPU: Intel(R) Core(TM)2 Duo E8400 @ 3.00GHz, 2 cores
- Memory (free -m on host alex):
  Mem: total 3291 MB, used 224 MB, free 3067 MB, shared 0, buffers 10 MB, cached 187 MB
  -/+ buffers/cache: used 26 MB, free 3264 MB
  Swap: total 2055 MB, used 0, free 2055 MB

Three bundles took part in this test:

1) apache2 + mod_wsgi
This is the most common and recommended solution to date. The module supports two modes of operation: embedded mode, in which Apache works much like it does with mod_python, and so-called daemon mode, which is similar to how FastCGI/SCGI works.
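The contract mod_wsgi relies on can be illustrated with a minimal sketch (the file name and response body below are ours, not from the article): mod_wsgi imports the script referenced by WSGIScriptAlias and looks up a module-level callable named application; in a real Django deployment that callable would be Django's WSGIHandler.

```python
# myapp.wsgi -- minimal WSGI entry point of the kind mod_wsgi loads;
# a real Django setup would instead assign Django's WSGIHandler to `application`
def application(environ, start_response):
    # mod_wsgi calls this once per request, in both embedded and daemon mode
    body = b"<html><body>Hello from mod_wsgi</body></html>"
    start_response("200 OK", [
        ("Content-Type", "text/html"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```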

2) nginx + flup
Flup is essentially a collection of WSGI-related modules and is the recommended way to run Django applications in FastCGI mode.
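As a rough sketch of how such a bundle is wired up (the socket path and response below are our assumptions, not from the article): flup's WSGIServer wraps any WSGI callable and speaks FastCGI over a socket, which nginx's fastcgi_pass directive then targets.

```python
# run_fcgi.py -- sketch: serving a WSGI app over FastCGI with flup
def application(environ, start_response):
    # any WSGI callable works here; Django would supply its own handler
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"served via flup"]

def serve(socket_path="/tmp/django.sock"):
    # requires the flup package; nginx's fastcgi_pass should point at socket_path
    from flup.server.fcgi import WSGIServer
    WSGIServer(application, bindAddress=socket_path).run()
```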

3) nginx + superfcgi
According to the author of the project, this is "The only one true way to run WSGI apps through fastcgi". Its author is barbuza. This solution is not as widespread as the others.

In order to eliminate artificial operating system limits on file descriptors, we set the following parameters on the server:
ulimit -n 10240 (maximum number of open file descriptors per process)
sysctl -w net.core.somaxconn=150000 (maximum listen queue backlog per socket)
sysctl -w fs.file-max=100000 (system-wide limit on open file handles)
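A process can check the descriptor limit it actually inherited; here is a small stdlib sketch (ours, not from the article) using Python's resource module:

```python
import resource

# RLIMIT_NOFILE is the per-process cap that `ulimit -n` adjusts
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("open-file soft limit:", soft, "hard limit:", hard)
```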

As a client, an ordinary Xen-based VDS located in the same rack as the server was used. The load was generated with httperf, a utility designed specifically for this kind of testing: it creates artificial load on a remote server and offers many tunable options.
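For reference, an httperf invocation for such a run might have looked like this (the server address, URI, and rate are placeholders; the article does not give the exact command):

```shell
# 20,000 connections, one request each, at a fixed rate;
# the rate is then raised from run to run to increase the load
httperf --server 192.0.2.10 --port 80 --uri /test/ \
        --num-conns 20000 --num-calls 1 --rate 1000 --timeout 5
```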
The test subject was a simple Django application in which a request to a particular URL invoked a specific view that rendered a small HTML template, so the core components of the framework took part in the test. 20,000 requests were issued to this URL at an increasing rate.
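The benchmarked application itself is not shown in the article; a view of the kind described, written in the Django 1.x-era style, might look roughly like this (all names here are illustrative, and running it requires a configured Django project of that era):

```python
# views.py -- a minimal view that renders a small HTML template
from django.http import HttpResponse
from django.template import Context, loader

def index(request):
    t = loader.get_template("index.html")
    return HttpResponse(t.render(Context({})))

# urls.py -- Django 1.x-era URL configuration pointing a URL at the view
from django.conf.urls.defaults import patterns
urlpatterns = patterns("", (r"^test/$", "myapp.views.index"))
```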

To rule out any leftover effects, the operating system on the server was reinstalled after each bundle, and the client was rebooted.

We were most interested in the reply rate (responses per second) and the average response time; memory consumption was also of interest.

The results were as follows:


The absolute leader is the nginx + superfcgi bundle, which, combined with its ease of installation, makes it the optimal production option, and that is what we recommend to our customers. Interestingly, even at 6,000 requests this bundle (unlike the others) showed no tendency toward degradation in reply rate or response time. As for peak memory consumption, it was 318 MB for Apache and about 290 MB in the case of nginx.

And now an interesting offer: for all Habr readers who buy a dedicated server from us, we will not only install and configure this bundle for free (open a ticket and ask for it to be forwarded to the Scripting Guru department), but also give a 30% discount on the first month of server rental. To activate the discount, specify the promo code habradedic when ordering.

Source: https://habr.com/ru/post/102678/
