
Battle of balancers

“Battle of balancers” is a load test of balancers / proxies that support WebSockets. These technologies are indispensable when scaling infrastructure.



The following technologies were tested:



- http-proxy, version 0.10.0

- Nginx, version 1.3.15 (development release)

- HAProxy, version 1.5-dev18 (development release)

- stud, for SSL termination in front of HAProxy

- an elementary “echo server”, for the control test.


There were doubts about including hipache. The reason it was excluded is simple: it is built on top of http-proxy. At the moment it uses a fork of that project which simply lacks the performance-related patches.



For testing, 3 separate, unrelated servers were used, all hosted on Joyent.



1. Proxy, 512 MB, Ubuntu server. All the proxy servers were installed on this machine. Image: sdc:jpc:ubuntu-12.04:2.4.0

2. WebSocket server, a 512 MB “smart machine” with Node.js, on which our WebSocket echo server was running. The server is written in Node.js and spread across several cores using the cluster module. Image: sdc:sdc:nodejs:1.4.0

3. Thor, 512 MB, another Node.js “smart machine” with the same specifications as the previous one. From this server we generated the load. Thor is a WebSocket load-generation tool developed by us. It is open source and available at http://github.com/observing/thor .



Proxy configuration



Our proxy server was a “clean” machine with Ubuntu 12.04. The following steps were taken to configure it and install all the dependencies. To make sure we are working with the latest versions of everything, run:



apt-get update
apt-get upgrade


The system needs the following dependencies:

- git, to access the GitHub repositories

- build-essential, for compiling the proxies from source (most of them only recently got support for WebSockets or HTTPS)

- libssl-dev, needed for HTTPS support

- libev-dev, required for stud, which is simply excellent



 apt-get install git build-essential libssl-dev libev-dev 


Node.js
Node.js is needed for http-proxy. Although http-proxy works with the latest version of Node.js, these tests were run on version 0.8.19 so that all the dependencies stay compatible. Node.js was cloned from GitHub:



 git clone git://github.com/joyent/node.git
 cd node
 git checkout v0.8.19
 ./configure
 make
 make install


This also installs the npm binary, so that we can install this project's dependencies. Run npm install in the root of this repository and http-proxy and all of its dependencies will be installed automatically.
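For illustration, a minimal WebSocket-capable proxy built on http-proxy 0.10 might look like the sketch below. This is an assumption about the shape of such a script, not the repository's actual http-proxy.js, and the backend address is hypothetical; in the 0.x API, a proxy created with createServer(port, host) forwards WebSocket upgrades to the backend as well as plain HTTP requests.

```javascript
// Sketch of a proxy on http-proxy 0.10.x: createServer(port, host)
// forwards HTTP requests and WebSocket upgrades to the given backend.
var httpProxy = require('http-proxy');

// Listen on :80, forward everything to a (hypothetical) echo server.
httpProxy.createServer(8080, '10.0.0.2').listen(80);
```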



Nginx
Nginx is already a widely deployed server. It supports proxying to various backend servers, but until recently it did not support WebSockets. This was added to the development branch of Nginx not long ago, so we installed the latest development version and compiled it from source:

Please note that since testing and writing this article, nginx 1.4.0 has been released with WebSocket support. So if you are reading this article and planning a production deployment, my advice is to use version 1.4.0 instead of the development versions.



 wget http://nginx.org/download/nginx-1.3.15.tar.gz
 tar xzvf nginx-1.3.15.tar.gz
 cd nginx-1.3.15
 ./configure --with-http_spdy_module --with-http_ssl_module \
   --pid-path=/var/run/nginx.pid --conf-path=/etc/nginx/nginx.conf \
   --sbin-path=/usr/local/sbin --http-log-path=/var/log/nginx/access.log \
   --error-log-path=/var/log/nginx/error.log --without-http_rewrite_module


As you can see from these options, we enabled SSL and SPDY and tweaked a few paths. In the end, this is the configuration summary we got:



 Configuration summary
   + PCRE library is not used
   + using system OpenSSL library
   + md5: using OpenSSL library
   + sha1: using OpenSSL library
   + using system zlib library

   nginx path prefix: "/usr/local/nginx"
   nginx binary file: "/usr/local/sbin"
   nginx configuration prefix: "/etc/nginx"
   nginx configuration file: "/etc/nginx/nginx.conf"
   nginx pid file: "/var/run/nginx.pid"
   nginx error log file: "/var/log/nginx/error.log"
   nginx http access log file: "/var/log/nginx/access.log"
   nginx http client request body temporary files: "client_body_temp"
   nginx http proxy temporary files: "proxy_temp"
   nginx http fastcgi temporary files: "fastcgi_temp"
   nginx http uwsgi temporary files: "uwsgi_temp"
   nginx http scgi temporary files: "scgi_temp"


Thereafter:



 make
 make install
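The key part of an nginx configuration that proxies WebSockets looks roughly like this (a sketch rather than the repository's actual nginx.conf; the upstream address is hypothetical). The Upgrade and Connection headers are hop-by-hop, so they must be set explicitly, and HTTP/1.1 is required toward the backend:

```nginx
events {
    worker_connections 16384;
}

http {
    upstream websocket_backend {
        server 10.0.0.2:8080;   # hypothetical WebSocket echo server
    }

    server {
        listen 80;

        location / {
            proxy_pass http://websocket_backend;
            proxy_http_version 1.1;                  # required for WebSockets
            proxy_set_header Upgrade $http_upgrade;  # forward the upgrade request
            proxy_set_header Connection "upgrade";
        }
    }
}
```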


Haproxy
HAProxy could already proxy WebSockets in tcp mode, and now also in http mode. HAProxy has also gained support for HTTPS termination. So once again we need to install a development branch.



 wget http://haproxy.1wt.eu/download/1.5/src/devel/haproxy-1.5-dev18.tar.gz
 tar xzvf haproxy-1.5-dev18.tar.gz
 cd haproxy-1.5-dev18
 make TARGET=linux26 USE_OPENSSL=1
 make install
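A minimal haproxy.cfg for proxying WebSockets might look like the following sketch (not the repository's actual config; addresses and limits are hypothetical). The long `timeout tunnel` matters, since HAProxy 1.5 treats an established WebSocket as a tunnel and would otherwise cut it off with the normal client/server timeouts:

```
global
    maxconn 16384

defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s
    timeout tunnel  1h      # applies once the WebSocket is established

frontend ws_in
    bind *:8080
    default_backend ws_out

backend ws_out
    server echo1 10.0.0.2:8080 maxconn 16384
```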


Stud
Although HAProxy has its own SSL termination option, stud is commonly put in front of HAProxy for SSL termination, and we want to test that setup as well.



 git clone git://github.com/bumptech/stud.git
 cd stud
 make
 make install
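A stud.conf along these lines terminates TLS on :443 and forwards plaintext to a local backend (a sketch; the certificate path, addresses, and worker count are hypothetical):

```
# stud.conf sketch: TLS termination in front of a plaintext backend
frontend = "[*]:443"
backend  = "[127.0.0.1]:8080"
pem-file = "/etc/stud/cert.pem"   # certificate + key in one PEM file
workers  = 2
```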


Now that everything is installed, the configuration files need to be set up. For Nginx, copy nginx.conf from the root of this repository to /etc/nginx/nginx.conf. The other proxies can be configured on the fly.



Kernel configuration



After installing all the proxies, some tuning of the sockets is required. I pulled this information from around the Internet:



 vim /etc/sysctl.conf 


And set the following values (apply them afterwards with sysctl -p):



 # General gigabit tuning:
 net.core.somaxconn = 16384
 net.core.rmem_max = 16777216
 net.core.wmem_max = 16777216
 net.ipv4.tcp_rmem = 4096 87380 16777216
 net.ipv4.tcp_wmem = 4096 65536 16777216
 net.ipv4.tcp_syncookies = 1
 # this gives the kernel more memory for tcp
 # which you need with many (100k+) open socket connections
 net.ipv4.tcp_mem = 50576 64768 98152
 net.core.netdev_max_backlog = 2500




Benchmarking



Two different tests were carried out:

1. A load test of the proxy servers without SSL. Here we test only the performance of the WebSocket proxying itself.

2. A load test of the proxy servers with SSL. You should not use unencrypted WebSockets anyway, since they establish connections very unreliably in browsers; but with SSL the extra work of termination lands on the proxy server.

In addition to our two tests, we tried different numbers of connections:

- 2k

- 5k

- 10k

And for some of the runs also:

- 20k

- 30k

Before each test, all WebSocket servers were restarted and the proxies reinitialized. Thor loaded each proxy server with X connections, 100 of them concurrent. For each established connection, one UTF-8 message was sent and received; once the message was received, the connection was closed.
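A run like the one just described can be launched with Thor roughly as follows (flag names from memory of the tool's README, so check thor --help; the proxy URL is hypothetical):

```
thor --amount 10000 --concurrent 100 ws://proxy.example.com:8080/
```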



Launch



Stud


 stud --config stud.conf 


Haproxy


 haproxy -f ./haproxy.cfg 


Nginx


 nginx 


http-proxy


 FLAVOR=http node http-proxy.js 


WebSocketServer


 FLAVOR=http node index.js 
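The echo server itself is described above as a Node.js app spread over several cores with the cluster module. Below is a stripped-down sketch of that pattern, not the repository's actual index.js (which speaks the WebSocket protocol); this one just echoes HTTP bodies to keep the example dependency-free, and the port is hypothetical:

```javascript
// Sketch: one worker per CPU core, all sharing a single listening port.
var cluster = require('cluster');
var http = require('http');
var os = require('os');

if (cluster.isMaster) {
  // Fork a worker per core; the workers share port 8080 between them.
  for (var i = 0; i < os.cpus().length; i++) cluster.fork();
} else {
  http.createServer(function (req, res) {
    req.pipe(res); // echo the request body straight back
  }).listen(8080);
}
```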


Results



http-proxy lives up to its name: it proxies requests, and does so quickly enough. But since it is based on Node.js, it eats up a lot of memory; even the simplest Node process requires 12+ MB. For 10k requests it took about 70 MB of memory. Compared with the control test, the HTTP proxying took 5 seconds more. HTTPS, as expected, showed the slowest result, because Node.js performs poorly at SSL. And that is not to mention the fact that under serious load SSL termination can completely stall the event loop.

There is a pull request for http-proxy that significantly reduces memory usage. I applied the patch manually, and memory consumption was halved as a result. But even after the patch it uses more memory than Nginx, which is easily explained by the latter being written in pure C.

I had high hopes for Nginx, and it did not disappoint. It used no more than 10 MB of memory and really did work very quickly. The first time I tested Nginx it showed awesome performance, except that Node was faster over SSL than Nginx, and I felt there must be some kind of error; I must have made a mistake in the Nginx setup. After a couple of hints from friends I did indeed change one line in the config: the cipher settings were wrong. A small fix, confirmed with openssl s_client -connect server:ip, sorted everything out (now a genuinely fast RC4 cipher is used by default).
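The cipher fix amounted to one directive in the nginx SSL config. It was something along these lines (the exact cipher string and paths here are assumptions; RC4-based suites were the fast, accepted choice at the time):

```nginx
server {
    listen 443 ssl;

    ssl_certificate     /etc/nginx/cert.pem;   # hypothetical paths
    ssl_certificate_key /etc/nginx/key.pem;

    # Prefer the server's fast ciphers over whatever the client suggests.
    ssl_ciphers RC4:HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;
}
```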

Next up was HAProxy, which showed the same performance as Nginx but required less memory (7 MB). The biggest difference came in the HTTPS test: it was very slow, not even close to Nginx. We hope this will be fixed, since we only tested a development branch. I had also made the same mistake as with Nginx, misconfiguring the ciphers, as was correctly pointed out to me on HackerNews. In addition to the plain HTTPS test, we put stud in front of HAProxy to verify the results it showed.



Findings



http-proxy is a great, flexible proxy that is easy to extend and build on. For production use I would advise running stud in front of it for SSL termination.

nginx and haproxy showed very close results; it is hard to say that either one is faster or better. But from an administration point of view, it is easier to deploy and operate a single nginx than a stud-plus-haproxy pair.



HTTP

Proxy        Connections  Handshake (median)  Latency (median)  Total
http-proxy   10k          293 ms              44 ms             30168 ms
nginx        10k          252 ms              16 ms             28433 ms
haproxy      10k          209 ms              18 ms             26974 ms
control      10k          189 ms              16 ms             25310 ms

Winner: Nginx and HAProxy are really fast, and their results are close.
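The per-proxy overhead relative to the control run falls straight out of the totals in the HTTP table above; a quick check of the "5 seconds more" claim:

```javascript
// Total times (ms) for 10k HTTP connections, taken from the table above.
var totals = { 'http-proxy': 30168, nginx: 28433, haproxy: 26974, control: 25310 };

// Overhead of each proxy over the bare echo server (control).
var overhead = {};
Object.keys(totals).forEach(function (name) {
  overhead[name] = totals[name] - totals.control; // ms slower than control
});

console.log(overhead); // http-proxy ~4.9 s, nginx ~3.1 s, haproxy ~1.7 s
```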



HTTPS

Proxy           Connections  Handshake (median)  Latency (median)  Total
http-proxy      10k          679 ms              62 ms             68670 ms
nginx           10k          470 ms              30 ms             50180 ms
haproxy         10k          464 ms              25 ms             50058 ms
haproxy + stud  10k          492 ms              42 ms             52403 ms
control         10k          703 ms              65 ms             71500 ms

Winner: Nginx and HAProxy are really fast, and their results are close.



All test results are available at: https://github.com/observing/balancerbattle/tree/master/results



Contributions



All the configurations are in the repository, and I would be very happy to see checks of whether we can get better performance out of these servers.

Source: https://habr.com/ru/post/179629/


