
Performance Analysis of WSGI Servers: Part Two

This article is a translation of Kevin Goldberg's "A Performance Analysis of Python WSGI Servers: Part 2" (dzone.com/articles/a-performance-analysis-of-python-wsgi-servers-part), with minor additions by the translator.


Introduction


In the first part of this series, you were introduced to WSGI and to the six WSGI servers that are, in the author's opinion, the most popular. This part presents the results of a performance analysis of those servers, carried out in a purpose-built test sandbox.
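The benchmark application itself is not reproduced in this translation, but any of the servers discussed can serve a minimal WSGI app like the sketch below. This is a generic illustration of the WSGI contract, not the actual code used in the benchmark:

```python
# A WSGI application is a callable that receives the request environ
# and a start_response callback, and returns an iterable of bytes.
# This is a generic sketch, not the benchmark's actual test app.
def application(environ, start_response):
    body = b"Hello, WSGI!"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```

Each server in the list below ultimately just calls a function of this shape for every incoming request; the differences measured here come from how efficiently they do so.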

Contestants


Due to lack of time, the study was limited to six WSGI servers. All the code for this project, with start-up instructions, is posted on GitHub. Over time the project may expand to cover performance analyses of other WSGI servers. For now, we are talking about these six:
  1. Bjoern describes itself as a “super fast WSGI server” and boasts that it is “the fastest, smallest and easiest WSGI server”. We have created a small application that uses most of the library's default settings.
  2. CherryPy is an extremely popular and stable framework and WSGI server. This small script was used to serve our sample application via CherryPy .
  3. Gunicorn was inspired by Ruby's Unicorn server (hence the name). It modestly describes itself as simply implemented, easy to use, and fairly fast. Unlike Bjoern and CherryPy , Gunicorn is a standalone server. We launched it using this command . The WORKER_COUNT parameter was set to twice the number of available processor cores, plus one, following the recommendation in the Gunicorn documentation .
  4. Meinheld is a high-performance WSGI-compatible web server that claims to be lightweight. Based on the example specified on the server site, we created our application .
  5. mod_wsgi was created by the same developer as mod_python . Like mod_python , it is available only for Apache. However, it includes a tool called mod_wsgi express , which spins up the smallest possible Apache instance. We configured and used mod_wsgi express with this command . To match Gunicorn , we configured mod_wsgi to create twice as many workers as there are processor cores.
  6. uWSGI is a full-featured application server. As a rule, uWSGI is placed behind a proxy server (for example, Nginx). However, to better evaluate the performance of each server on its own, we ran it bare, with two workers per available processor core.
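The Gunicorn worker count mentioned above follows the rule of thumb from the Gunicorn documentation: twice the number of CPU cores, plus one. As a sketch (the function name is our own), this can be computed like so:

```python
import multiprocessing

def recommended_workers(cores=None):
    """Gunicorn's rule of thumb for worker count: 2 * CPU cores + 1.

    If cores is not given, detect the machine's CPU count.
    """
    if cores is None:
        cores = multiprocessing.cpu_count()
    return cores * 2 + 1
```

On the two-core test machine used here, this yields five workers. Note that mod_wsgi and uWSGI were configured slightly differently above (two workers per core, without the extra one).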

Benchmark


To make the test as objective as possible, a Docker container was created to isolate each server under test from the rest of the system. The container also ensured that every run started from a clean slate.

Server:



Testing:



Metrics:

  - Requests per second (RPS)
  - Latency
  - RAM usage
  - Number of errors
  - CPU usage

Results


All the raw performance data is included in the project repository , along with a consolidated CSV file. Charts for visualization were created in Google Docs.
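A consolidated CSV like the one provided can be summarized in a few lines of Python. The column names "server" and "rps" below are our own assumptions for illustration; the real file in the repository may use different headers:

```python
import csv
from collections import defaultdict

def average_rps(path):
    """Compute the average requests/sec per server from a results CSV.

    Assumes columns named "server" and "rps"; these names are
    illustrative, not taken from the actual repository file.
    """
    totals = defaultdict(lambda: [0.0, 0])  # server -> [sum, count]
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            t = totals[row["server"]]
            t[0] += float(row["rps"])
            t[1] += 1
    return {name: s / n for name, (s, n) in totals.items()}
```

The graphs below were produced from aggregations of exactly this kind.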

RPS versus the number of simultaneous connections


This graph shows the average number of requests per second at each level of concurrency; the higher the number, the better.



WINNER: Bjoern

Bjoern


By sustained requests per second, Bjoern is the obvious winner. However, its numbers are so much higher than the competitors' that we are a bit skeptical, and not entirely sure Bjoern is really that stunningly fast. We initially tested the servers in alphabetical order and suspected Bjoern had an unfair advantage from going first. However, even after restarting the servers in random order and re-testing, the result remained the same.

uWSGI


We were disappointed by uWSGI's weak results; we expected it to be among the leaders. During testing, we noticed that uWSGI was printing log messages to the screen, and we initially attributed the poor performance to that extra work. However, even after the “ --disable-logging ” option was added, uWSGI remained the slowest server.

As mentioned in the uWSGI documentation, it is usually paired with a proxy server such as Nginx. However, we are not sure that this alone explains such a big difference.

Latency


Latency is the time elapsed between a request and its response. Lower numbers are better.



WINNER: CherryPy

RAM usage


This metric shows the memory requirements and the “lightness” of each server. Lower numbers are better.



WINNERS: Bjoern and Meinheld

Number of errors


An error is counted when the server crashes, drops a connection, or times out. The lower, the better.


For each server, we calculated the ratio of total requests to errors:
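The requests-to-errors ratio can be computed as in the sketch below. The function name and the guard against zero errors are our own additions; the article does not describe how an error-free run is reported:

```python
def request_error_ratio(total_requests, total_errors):
    """Ratio of total requests to errors; higher means more reliable.

    Returns infinity when no errors occurred. This guard is our own
    assumption, since the article does not cover the zero-error case.
    """
    if total_errors == 0:
        return float("inf")
    return total_requests / total_errors
```

For example, a server that handled 100,000 requests with 50 errors scores 2,000: one error per two thousand requests.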


WINNER: CherryPy

CPU usage


High CPU utilization is neither good nor bad, as long as the server performs well; it does, however, reveal something interesting about how each server works. Since two CPU cores were used, the maximum possible utilization is 200 percent.



WINNER: None, because this is more an observation of behavior than a comparison of performance.

Conclusion


To summarize, here are some general takeaways that can be gleaned from the results for each server:

Source: https://habr.com/ru/post/427217/

