If you use Socket.IO or Faye over WebSockets and want to put them behind an Nginx reverse proxy, you will run into a problem: Nginx has no WebSocket support. The WebSocket handshake requires HTTP/1.1, while Nginx's proxy module only speaks HTTP/1.0 to its upstreams.
What to do?
You can try to work around it: use HAProxy to proxy the TCP connections, or fall back to long polling.
But there is also a way to do the reverse proxying with Nginx itself, using an unofficial patch that adds a tcp_proxy module and makes it possible to forward arbitrary TCP connections (essentially what HAProxy does).
Compiling nginx with the tcp_proxy module
...in general, looks like this:
export NGINX_VERSION=1.0.4
curl -O http://nginx.org/download/nginx-$NGINX_VERSION.tar.gz
git clone https://github.com/yaoweibin/nginx_tcp_proxy_module.git
tar -xvzf nginx-$NGINX_VERSION.tar.gz
cd nginx-$NGINX_VERSION
patch -p1 < ../nginx_tcp_proxy_module/tcp.patch
./configure --add-module=../nginx_tcp_proxy_module/
make && sudo make install
The commands above assume that the build tools are installed and all of nginx's dependencies are satisfied, for example:
sudo apt-get install curl build-essential git-core
sudo apt-get build-dep nginx
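Once the build finishes, you can check that the module was actually compiled in. A quick sketch, assuming the default install prefix /usr/local/nginx:
# Print the version and configure arguments of the freshly built binary;
# --add-module=../nginx_tcp_proxy_module/ should appear among them.
/usr/local/nginx/sbin/nginx -V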
Note that instead of make install it makes sense for ubuntu-server owners to run checkinstall, so that nginx is installed as a package, and to pass extra ./configure options for additional modules and for the log and configuration paths, for example as described in the Q & A.
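A rough sketch of that variant is below; the package name and the path options are only illustrative, adjust them to your own layout:
# Configure with explicit config/log/pid paths, then build a .deb instead of
# copying files straight into the system with make install.
./configure --add-module=../nginx_tcp_proxy_module/ \
  --conf-path=/etc/nginx/nginx.conf \
  --error-log-path=/var/log/nginx/error.log \
  --http-log-path=/var/log/nginx/access.log \
  --pid-path=/var/run/nginx.pid
make
sudo checkinstall --pkgname=nginx-tcp --pkgversion=$NGINX_VERSION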
Proxy configuration
Let's create a simple vhost for the case where we want to forward port 80, with load balancing, to ports 8001, 8002, 8003 and 8004 of our backends (for example, node.js servers running faye or socket.io).
On port 9000 of localhost we will expose a debug page showing the status of the proxied upstreams.
tcp {
    upstream websockets {
        ## node processes
        server 127.0.0.1:8001;
        server 127.0.0.1:8002;
        server 127.0.0.1:8003;
        server 127.0.0.1:8004;

        ## health checks of the backends
        check interval=3000 rise=2 fall=5 timeout=1000;
    }

    server {
        listen 0.0.0.0:80;
        server_name _;  ## catch-all; put your host name here if needed

        tcp_nodelay on;
        proxy_pass websockets;
    }
}

http {
    ## status check page for websockets
    server {
        listen 9000;

        location /websocket_status {
            check_status;
        }
    }
}
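After that, check the configuration and start nginx. A small sketch, assuming nginx was built with the default prefix /usr/local/nginx and the blocks above were added to its nginx.conf:
# Verify the configuration syntax, then start nginx
sudo /usr/local/nginx/sbin/nginx -t
sudo /usr/local/nginx/sbin/nginx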
With the backends running, we can observe a joyful picture on the status page:
(screenshot of the /websocket_status page with the state of the upstream servers)
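The same information can also be checked from the terminal, assuming the ports from the config above:
# The check_status page on port 9000 reports the state of each upstream server
curl http://127.0.0.1:9000/websocket_status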
When running multiple backends, keep in mind that there is no guarantee that a client will always be connected to the same backend (the same node.js process), so you need to think through how sessions are shared across the cluster (for example, by storing them in redis).
Good luck!
P.S. Based on an article by Johnathan Leppert.