Web Sockets is an emerging standard for full-duplex (two-way) communication with a server over a TCP connection, compatible with HTTP. It lets you organize live, real-time messaging between the browser and the web server in a way completely different from the usual "request URL, get response" scheme. When I looked at this standard two years ago, it was still in its infancy: there was only an unapproved draft, experimental support in a few browsers and web servers, and it was disabled by default in Firefox due to security issues. The situation has since changed. The standard has gone through several revisions (some without backward compatibility), received RFC status (RFC 6455), and shed its childhood diseases. All modern browsers, including IE10, claim support for one of the protocol versions, and there are web servers quite ready for production use.
I decided it was time to try it on a live project. Here is what came of it.
The essence of the task
The centerpiece of my small personal site, Klavogonki.ru, is the list of current typing races. The list is extremely dynamic: players create a new race every few seconds, sometimes several per second. Races start with a countdown and, once started, move from the open-games section to the active section; after all players finish, the race is removed from the page. A single race can gather anywhere from one to a hundred players, all of whom need to be displayed right there.

How it worked before
Initially, when this part of the site's functionality had to be built, dynamically updating the list posed many difficulties. Dozens of people sit on the list page at the same time, each of whom wants to see a fresh picture. Many races have only 10-20 seconds between creation and start, so for players to be able to join them, updates must be quite fast.
Reloading the entire page on a timer was out of the question here, so other options were needed (a remark is in order: I had no desire to use Flash on the site without a very strong need).
At first glance, the most obvious and simple solution seemed to be long polling: a hanging connection to the server that closes the moment a new event arrives and is then reopened. However, after some tests this option also proved unviable: events flowed in a continuous stream, because the client needs to be told not only about the creation of a new game, but also about changes to the parameters of every game (for example, the start of the countdown, status changes, the composition of players), and the resulting number of requests caused the server a certain degree of displeasure. The overhead of opening and closing requests also turned out rather large.
HTTP-streaming could not be used due to problems with proxy servers for many users.
So I settled on simply refreshing the page content every 3 seconds via AJAX requests. The server cached the current data and delivered it to clients as JSON; to save traffic, not the whole list was sent, but only changed data, using a versioning system (if the server's version was newer than the one the client requested, the new data was returned; otherwise only the current version number).
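As a minimal sketch of that versioning scheme (the `recordChange` and `buildResponse` names are mine, not the site's real code), the server could keep a version counter next to the cached list and answer each poll with only the changes the client has not seen yet:

```javascript
// Hypothetical sketch of the version-based delta scheme described above.
var cache = { version: 0, changes: [] };

// Called on the server whenever game data changes.
function recordChange(change) {
    cache.version += 1;
    cache.changes.push({ version: cache.version, data: change });
}

// Builds the JSON response for a client that last saw `clientVersion`.
function buildResponse(clientVersion) {
    if (clientVersion >= cache.version) {
        // Nothing new: answer with the current version number only.
        return { version: cache.version };
    }
    // Send only the changes the client has not seen yet.
    var fresh = cache.changes.filter(function(c) {
        return c.version > clientVersion;
    });
    return { version: cache.version, changes: fresh };
}

recordChange({ gameId: 1, status: 'created' });
recordChange({ gameId: 1, status: 'countdown' });

console.log(JSON.stringify(buildResponse(2))); // up-to-date client: version only
console.log(JSON.stringify(buildResponse(0))); // stale client: both changes
```

The point of the scheme is that an up-to-date client costs almost nothing per poll: the response is a single version number.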
The system performed well and served for a long time. It had one big drawback, however: it was very hard to join a race with a 10-second countdown before the start. Besides, it did not fit the spirit of a dynamic online racing game at all and did not look very technologically advanced overall.
You can see the old version of this page via this link.
How it works now
In a nutshell, web sockets made it possible to overhaul this entire process.
To begin with, a server had to be chosen to live alongside the current game backend. For a number of reasons I chose node.js: its event-oriented model and mature JavaScript callbacks were a perfect fit for this task.
The shared medium for communication between the PHP backend and the node.js server is Redis pub/sub channels. When a new game is created, or on any action that modifies the data, PHP does something like this (the code here and below is greatly simplified):
```php
$redis = new Redis();
$redis->pconnect('localhost', 6379);
$redis->publish('gamelist', json_encode(array(
    'game created',
    array('gameId' => $id)
)));
```
Redis runs as a separate daemon on its own TCP port and accepts and relays messages from any number of connected clients. This makes the system easy to scale regardless of the number of PHP and node.js processes (and, looking ahead optimistically, servers). Currently about 50 PHP processes and 2 node.js processes are running.
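The messages travelling through Redis are plain JSON arrays of the form ["event name", args]. A small sketch of how the node.js side can unpack such an envelope (the `parseEnvelope` helper is mine, not part of the original code):

```javascript
// The PHP side publishes JSON arrays of the form ["event name", {args}].
// parseEnvelope (a hypothetical helper) unpacks one such message.
function parseEnvelope(rawMsgData) {
    var msgData = JSON.parse(rawMsgData);
    return { name: msgData[0], args: msgData[1] };
}

// The same string the PHP snippet above would publish for gameId 42:
var raw = JSON.stringify(['game created', { gameId: 42 }]);
var msg = parseEnvelope(raw);
console.log(msg.name);        // 'game created'
console.log(msg.args.gameId); // 42
```

Keeping the envelope this dumb is deliberate: PHP and node.js share nothing but the JSON format.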
On the node.js side, at startup a subscription is made to the Redis channel named gamelist:

```javascript
var redis = require('redis').createClient(6379, 'localhost');
redis.subscribe('gamelist');
```
The Socket.IO library is used for working with clients (upd: comrades Voenniy and Joes say in the comments that there are better alternatives such as SockJS and Beseda, which may well be true). It allows using web sockets as the main transport while falling back to other transports such as Flash or xhr-polling if the browser does not support web sockets. It also simplifies working with clients in general: for example, it provides an API for multiplexing and splitting connected clients into pseudo-directories (channels), lets you name events, and offers some other goodies.
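The fallback order can be stated explicitly in the server configuration. A hedged sketch, using the option names from the Socket.IO 0.x generation of the library (check the docs of your version, the API changed between releases):

```javascript
var io = require('socket.io').listen(80);

// Prefer real web sockets, then degrade gracefully
// (Socket.IO 0.x-style configuration; names assumed, verify for your version).
io.set('transports', ['websocket', 'flashsocket', 'xhr-polling', 'jsonp-polling']);
```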
```javascript
var io = require('socket.io').listen(80);
var gamelistSockets = io.of('/gamelist');
```
When a client browser connects to ws://ws.klavogonki.ru/gamelist, it is registered in the gamelist Socket.IO channel. To do this, the browser runs:
```html
<script src="http://ws.klavogonki.ru/socket.io/socket.io.js" type="text/javascript"></script>
...
<script type="text/javascript">
    var socket = io.connect('ws.klavogonki.ru/gamelist');
</script>
```
When an event from the backend arrives on the Redis channel, it is processed and analyzed, and then sent to all clients connected to gamelistSockets:
```javascript
redis.on('message', function(channel, rawMsgData) {
    if (channel == 'gamelist') {
        var msgData = JSON.parse(rawMsgData);
        var msgName = msgData[0];
        var msgArgs = msgData[1];
        switch (msgName) {
            case 'game created':
                // ...
                gamelistSockets.emit('game created', info);
                break;
            case 'game updated':
                // ...
                gamelistSockets.emit('game updated', info);
                break;
            case 'player updated':
                // ...
                gamelistSockets.emit('player updated', info);
                break;
        }
    }
});
```
The browser receives the event in exactly the same way and renders the necessary changes on the page.
```javascript
socket.on('game created', function(data) { insertGame(data); });
socket.on('game updated', function(data) { updateGame(data); });
socket.on('player updated', function(data) { updatePlayer(data); });
```
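The handlers above can be sketched as plain list operations. These `insertGame`/`updateGame` bodies are my own minimal stand-ins for the site's real rendering code, just to show the shape of the event-driven updates:

```javascript
// Minimal in-memory stand-ins for the real rendering handlers.
var games = [];

function insertGame(data) {
    games.push(data);
}

function updateGame(data) {
    for (var i = 0; i < games.length; i++) {
        if (games[i].gameId === data.gameId) {
            // Merge only the changed fields into the stored game.
            for (var key in data) {
                games[i][key] = data[key];
            }
            return;
        }
    }
}

insertGame({ gameId: 7, status: 'open', players: 1 });
updateGame({ gameId: 7, status: 'countdown' });
console.log(games[0].status);  // 'countdown'
console.log(games[0].players); // 1
```

Note that each event carries only a delta, so the merge keeps untouched fields (like `players` here) intact.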
The principle is simple and clear. The advanced technologies at the heart of this scheme greatly simplify the process and let you focus on the actual logic. True, I had to tinker a bit with reworking some parts of the PHP code to follow the ideology of "report the change, not the state", and move the web-socket domain to a machine separate from the main one (so as not to struggle with sharing a proxy on port 80), but the resulting advantages were very significant:
- Maximum interface dynamics: updates happen in real time, you can track individual changes and feel you are in an online game, not on a chat page from the 90s.
- Almost no need for caching, since data travels from the backend straight to the browser.
- Natural traffic savings from sending only the necessary state changes (and if you bolt on compression, it gets even more interesting).
- Network load growth is almost imperceptible, since node.js was designed precisely to hold and process any conceivable number of simultaneous connections; CPU load actually dropped, because each state change is computed once on the backend and sent to all clients as a finished product.
- The event-oriented scheme lets you know about every moment the data changes and, for example, animate elements appearing and disappearing.
Solid profit, in short.
You can see the result here. The difference is visible to the naked eye.
As a bonus, two small tables with statistics on the Klavogonki audience: browsers, and the transports used by Socket.IO:
Browser | Share | Transport | Share
---|---|---|---
Chrome | 51% | websocket | 90%
Firefox | 20% | xhr-polling | 5%
Opera | 15% | flashsocket | 4%
IE (roughly split between 8 and 9) | 6% | jsonp-polling | 1%
As you can see, it is quite ready for use.
Total
There could be a concluding section here with a summary, a bibliography, and a moral. But I will save your time and simply say: web sockets are very cool!
P.S. The newly developed parts of the project (including the one described above) also use such interesting things as MongoDB and angular.js. If there is interest, future posts will be on those topics.