We are all accustomed to using instant messaging; many people implement this functionality in their own projects, some using a database or a queue server such as memcacheq. There are also ready-made solutions, such as ejabberd.
If you are interested in how to do it yourself, then welcome under the cut, where the server side of an instant messaging service is covered. The client side, I hope, you can figure out for yourself...
The essence of the "short message service" is basically a queue server speaking HTTP, which should integrate easily with any JS framework. All results are returned as JSON, and the browser exchanges data with the server via AJAX.
Using the POST method, we write data under a key, which is the URL (or part of it). Using the GET method, we retrieve from the queue (by the same URL key) what was written. So we have one hash table whose keys are URIs and whose values are queues. A queue entry is simply a message string; if you wish, you can add a timestamp. There are many ideas for further development.
So, closer to the topic:
```cpp
#include <sys/types.h>
#include <sys/time.h>
#include <sys/queue.h>
#include <stdlib.h>
#include <err.h>
#include <event.h>
#include <evhttp.h>

#include <map>
#include <queue>
#include <string>

using namespace std;

std::map<string, queue<string> > ht;

void generic_handler(struct evhttp_request *req, void *arg)
{
    struct evbuffer *buf = evbuffer_new();
    if (buf == NULL)
        err(1, "failed to create response buffer");

    string key = evhttp_request_uri(req);
    string out;

    if (req->type == EVHTTP_REQ_POST) {
        const char *str_len = evhttp_find_header(req->input_headers, "Content-Length");
        int len = str_len ? atoi(str_len) : 0;
        out.assign((const char *)EVBUFFER_DATA(req->input_buffer), len);
        if (ht.find(key) == ht.end()) {
            queue<string> q;
            q.push(out);
            ht.insert(pair<string, queue<string> >(key, q));
        } else {
            ht[key].push(out);
        }
        evbuffer_add_printf(buf, "{\"result\": \"Ok\"}\r\n");
    } else {
        if (ht.find(key) == ht.end()) {
            evbuffer_add_printf(buf, "{\"result\": null}\r\n");
        } else {
            queue<string> &q = ht[key];  // reference: no copying of the whole queue
            if (!q.empty()) {
                out = q.front();
                q.pop();
                // NB: the message is not JSON-escaped yet (see below)
                evbuffer_add_printf(buf, "{\"result\": \"%s\"}", out.c_str());
            } else {
                evbuffer_add_printf(buf, "{\"result\": null}");
            }
        }
    }
    evhttp_send_reply(req, HTTP_OK, "OK", buf);
    evbuffer_free(buf);  // evhttp_send_reply copies the data; free our buffer
}

int main(int argc, char **argv)
{
    struct evhttp *httpd;
    event_init();
    httpd = evhttp_start("0.0.0.0", 8080);
    evhttp_set_gencb(httpd, generic_handler, NULL);
    event_dispatch();
    evhttp_free(httpd);
    return 0;
}
```
A few explanations of the code:

The web server is initialized in main(): event_init(), evhttp_start() on port 8080, and a generic callback registered for every URL. The server is built on libevent, which has excellent performance: on my 2.3 GHz laptop it delivers about 2k qps.

In the handler we first create the response buffer, then take the request URI and use it as the key; there is plenty of room for optimization here.

Next we check for POST. Of course, you could also check for HEAD (evhttp supports no other methods), but we will not complicate life yet.

For a POST we build the string holding the data. Since garbage accumulates in the input buffer, we read exactly as many bytes as the Content-Length header specifies. If the key does not exist yet, we create a new queue, push the data item into it, and insert it into the hash table; otherwise we simply push the data item onto the existing queue.

The else branch serves the GET method. We check whether the key exists; if not, we return a null result. Otherwise we take the queue for the key and check whether it is empty. If it is not, we take the front message off the queue and output it, so the queue shrinks by one message. One could rightly flame me that escaping is needed here; yes, it definitely should be added. If the queue is empty, we report that too.

Finally, evhttp_send_reply() completes the request with a 200 OK response.
It should be noted that the model is single-threaded, so there is no need for any locks on writes. Although I will work on that question too.
Results from ab (ApacheBench):
Concurrency Level: 3
Time taken for tests: 0.415 seconds
Complete requests: 1000
Failed requests: 0
Write errors: 0
Total transferred: 83000 bytes
HTML transferred: 19000 bytes
Requests per second: 2409.31 [#/sec] (mean)
Time per request: 1.245 [ms] (mean)
Time per request: 0.415 [ms] (mean, across all concurrent requests)
Transfer rate: 195.29 [Kbytes/sec] received
Memory consumption per 10,000 messages is just over 600K.
In practice, the plan is to put it behind nginx and let nginx be responsible for security (ngx_http_accesskey_module).
config file:
location /test {
    proxy_pass http://127.0.0.1:8080;
    ## ngx_http_accesskey_module
}
Through nginx, however, performance only reaches about 800 rps.
There are many ideas for how and where to take this next: for example, showing activity/presence status, or anti-spam.
Any other ideas are welcome. Since I am currently between jobs, I have no project to test it on. By my calculations, it should comfortably handle about 10 thousand simultaneous clients polling once every 1-1.5 minutes (130-200 rps).