
C++ web application, or taming the FastCGI daemon

Nowadays, thanks to tools like NodeJS, creating a web application is nothing: download the binary, knock out five lines of JS, and you can brag about it. Plug in express, add another five lines, and you get a full-fledged web application with routing, templates, sessions and other amenities. So simple it's almost boring. And I got curious: how are things with my old acquaintance C++, whom I haven't seen for a good five years? At some point I was seduced by ActionScript and other JavaScripts, and completely forgot about the good friend who had helped me out more than once. In light of recent articles on the Configurable Omnipotent Custom Applications Integrated Network Engine (Cocaine for short), a project called Fastcgi Daemon, connected with Cocaine's HTTP interface, caught my eye. And so, meet

Fastcgi Daemon - an open-source framework from Yandex for writing FastCGI applications in C++.

That is,

Fastcgi Daemon is an open source framework developed at Yandex and designed for creating highly loaded FastCGI applications in C++.

Unfortunately, this is all you will find in the README of the official repository.
There is more documentation at github.com/lmovsesjan/Fastcgi-Daemon/wiki/_pages , but it is not enough for full-fledged work. For example, the entire installation section there is one line:

sudo apt-get install fastcgi-daemon2-init libfastcgi-daemon2-dev libfastcgi2-syslog 

But these packages could not be found in the official Ubuntu repositories, or in any others (maybe I looked badly). In this article I decided to collect my research on installing, configuring and using this tool. All the sources below, as well as ready-made deb files, can be found at github.com/nickalie/HelloFastCGI

Installation


The project is supported on Ubuntu, so I performed all the operations below on Ubuntu 12.04 64-bit with fresh updates.
First, install all necessary dependencies. You can do this with the following command:

 sudo apt-get install -y build-essential git debhelper automake1.9 autotools-dev libboost-dev libboost-thread-dev libfcgi-dev libxml2-dev libboost-regex-dev libtool libssl-dev autoconf-archive 

Now we clone the repository with Fastcgi Daemon. There are a couple of repositories to choose from here; I went with this one:

 git clone https://github.com/golubtsov/Fastcgi-Daemon.git 

Go to the folder with the freshly cloned project

 cd Fastcgi-Daemon 

and run the build

 dpkg-buildpackage -rfakeroot 

The build produces ready-to-install deb files in the parent directory. Run

 cd .. 

and

 sudo dpkg -i ./libfastcgi-daemon2-dev_2.10-13_amd64.deb \
     ./libfastcgi-daemon2_2.10-13_amd64.deb \
     ./fastcgi-daemon2-init_2.10-13_amd64.deb \
     ./fastcgi-daemon2_2.10-13_amd64.deb \
     ./libfastcgi2-syslog_2.10-13_amd64.deb

We also need a web server that can work with FastCGI. I use nginx for my needs; the documentation advises the same.

 sudo apt-get install nginx 

If you want to use the latest version of this web server, run the following first:

 sudo add-apt-repository ppa:nginx/stable && sudo apt-get update 

That's it for the installation. Now on to our first application.

First application


Create the HelloFastCGI.cpp file and put the following code into it:

 #include <fastcgi2/component.h>
 #include <fastcgi2/component_factory.h>
 #include <fastcgi2/handler.h>
 #include <fastcgi2/request.h>

 #include <iostream>
 #include <sstream>

 class HelloFastCGI : virtual public fastcgi::Component, virtual public fastcgi::Handler {
 public:
     HelloFastCGI(fastcgi::ComponentContext *context) : fastcgi::Component(context) {
     }

     virtual void onLoad() {
     }

     virtual void onUnload() {
     }

     virtual void handleRequest(fastcgi::Request *request, fastcgi::HandlerContext *context) {
         request->setContentType("text/plain");
         std::stringbuf buffer("Hello " + (request->hasArg("name") ? request->getArg("name") : "stranger"));
         request->write(&buffer);
     }
 };

 FCGIDAEMON_REGISTER_FACTORIES_BEGIN()
 FCGIDAEMON_ADD_DEFAULT_FACTORY("HelloFastCGIFactory", HelloFastCGI)
 FCGIDAEMON_REGISTER_FACTORIES_END()

The method that interests us most is handleRequest: it is the one that processes the request. I hope it is clear from the code what happens in this method, but I will explain just in case. If the request (POST or GET) has a “name” parameter, we output the text “Hello %name%”, otherwise “Hello stranger”.

The fastcgi::Request class is responsible for both the request and the response at the same time, although this functionality is usually split between two classes or objects, as, for example, in the same NodeJS.
Out of the box we can already work with cookies, set arbitrary HTTP statuses, headers, and so on. In general, we get a gentleman's set for developing web services and web applications. The only thing not implemented by default is sessions, but they can be bolted on in no time; I will cover that next time.
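
To give a feel for that gentleman's set, here is a sketch of an alternative handleRequest for the HelloFastCGI class above, setting a status, a header and a cookie. This is my own illustration, not code from the project: the method and header names (fastcgi2/cookie.h, setStatus, setHeader, setCookie, hasCookie) are how I read the library headers, so verify them against the headers shipped by libfastcgi-daemon2-dev.

 // Sketch only: check these names against your fastcgi2/request.h and fastcgi2/cookie.h.
 #include <fastcgi2/cookie.h>   // add at the top of HelloFastCGI.cpp

 virtual void handleRequest(fastcgi::Request *request, fastcgi::HandlerContext *context) {
     request->setContentType("text/plain");

     // Arbitrary response header and an explicit status code.
     request->setHeader("X-Powered-By", "fastcgi-daemon2");
     request->setStatus(200);

     // Set a cookie; a simple session mechanism could be built on top of this.
     fastcgi::Cookie visited("visited", "1");
     request->setCookie(visited);

     // Greet returning visitors differently, based on the cookie sent by the browser.
     std::stringbuf buffer(request->hasCookie("visited") ? "Hello again" : "Hello stranger");
     request->write(&buffer);
 }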

But back to the matter at hand. Now we need to compile this class into a shared library:

 g++ HelloFastCGI.cpp -O2 -fPIC -lfastcgi-daemon2 -shared -o libHelloFastCGI.so 

Then you should prepare the HelloFastCGI.conf configuration file (it is important that the extension be “conf”):

 <?xml version="1.0"?>
 <fastcgi xmlns:xi="http://www.w3.org/2001/XInclude">
     <pools>
         <pool name="main" threads="1" queue="5000"/>
     </pools>
     <handlers>
         <handler pool="main" url="/hellofastcgi">
             <component name="HelloFastCGIComponent"/>
         </handler>
     </handlers>
     <components>
         <component name="HelloFastCGIComponent" type="MainModule:HelloFastCGIFactory"/>
         <component name="daemon-logger" type="logger:logger">
             <level>INFO</level>
             <ident>hellofastcgi</ident>
         </component>
     </components>
     <modules>
         <module name="MainModule" path="./libHelloFastCGI.so"/>
         <module name="logger" path="/usr/lib/fastcgi2/fastcgi2-syslog.so"/>
     </modules>
     <daemon>
         <logger component="daemon-logger"/>
         <endpoint>
             <backlog>128</backlog>
             <socket>/tmp/fastcgi_daemon.sock</socket>
             <threads>1</threads>
         </endpoint>
         <pidfile>/var/run/fastcgi2/HelloFastCGI.pid</pidfile>
         <monitor_port>20012</monitor_port>
     </daemon>
 </fastcgi>


Here we have a handler for requests that come to “/hellofastcgi” (for example, www.somedomain.com/hellofastcgi ). The handler is the HelloFastCGIComponent component, which lives in the MainModule module. More precisely, the module contains the HelloFastCGIFactory factory, which produces the needed component. MainModule, in turn, gets its code from the freshly compiled libHelloFastCGI.so. Also pay attention to the contents of the “socket” tag: this is nothing more than a unix socket, which we will soon need to specify in the nginx settings. “pidfile” matters when daemonizing Fastcgi Daemon: its name must match the name of the conf file, differing only in the extension, and for the daemon to be able to start/stop/restart, the pid file must live in /var/run/fastcgi2/.
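
To make the “MainModule:HelloFastCGIFactory” notation clearer: the part before the colon is the module name, the part after it is the factory name registered inside that module's shared library. Below is a sketch of how a second handler class could be registered in the same library; GoodbyeFastCGI and its factory name are my own invented examples, not anything from the project.

 // Both factories are exported from the same .so, i.e. the same <module> entry in the config.
 FCGIDAEMON_REGISTER_FACTORIES_BEGIN()
 FCGIDAEMON_ADD_DEFAULT_FACTORY("HelloFastCGIFactory", HelloFastCGI)
 FCGIDAEMON_ADD_DEFAULT_FACTORY("GoodbyeFastCGIFactory", GoodbyeFastCGI)
 FCGIDAEMON_REGISTER_FACTORIES_END()

In the config, such a component would then get its own entry with type="MainModule:GoodbyeFastCGIFactory" and its own handler with a separate url.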

It's time to set up the web server. Since I am doing all this on a freshly installed nginx, without further ado I edit /etc/nginx/sites-available/default. Its contents should look something like this:

 server {
     listen 80;

     location / {
         include fastcgi_params;
         fastcgi_param SCRIPT_FILENAME $fastcgi_script_name;
         fastcgi_pass unix:/tmp/fastcgi_daemon.sock;
     }
 }

Restart nginx

 sudo service nginx restart 

Run our application with

 fastcgi-daemon2 --config=HelloFastCGI.conf 

If everything was done correctly, then when you open localhost/hellofastcgi in the browser you should see

 Hello stranger 

Add the argument, localhost/hellofastcgi?name=nick , and you get

 Hello nick 

Hooray, it works! I hope it does for you too.

However, right now Fastcgi Daemon runs in the console as an ordinary application. Where is the promised daemonization? We will talk about that below.

Setting up the daemon


Fortunately, Fastcgi Daemon is called a Daemon for a reason: it is very easy to daemonize (pardon the tautology).

Take the previously created HelloFastCGI.conf, replace the relative path to libHelloFastCGI.so in it with an absolute one, and put it in /etc/fastcgi2/available.

Now you can start/stop/restart the daemon in the usual way:

 sudo service fastcgi-daemon2 start/stop/restart <appname>/all 

In our case it will be

 sudo service fastcgi-daemon2 start HelloFastCGI 

To run all available applications in /etc/fastcgi2/available, use the keyword “all”

 sudo service fastcgi-daemon2 start all 

It is also nice that, in the event of an unexpected crash, your application will be restarted automatically.

Benchmarks


Out of curiosity, I decided to compare the performance of Fastcgi Daemon and NodeJS. To do this, I sketched a similar application in JS:

 var http = require('http');
 var url = require('url');

 http.createServer(function (req, res) {
     res.writeHead(200, {'Content-Type': 'text/plain'});
     var query = url.parse(req.url, true).query;
     res.end('Hello ' + (query.name ? query.name : 'stranger'));
 }).listen(1337, '127.0.0.1');

 console.log('Server running at http://127.0.0.1:1337/');

For the purity of the experiment, I configured proxy_pass in nginx so that the node application was also served through the web server.
Both were tested with Apache Bench:

 ab -c 100 -n 20000 http://IPorURL/hellofastcgi?name=Nikolay 

and

 ab -c 100 -n 20000 http://IPorURL/?name=Nikolay 

Results:

Fastcgi daemon

 Concurrency Level:      100
 Time taken for tests:   15.181 seconds
 Complete requests:      20000
 Failed requests:        0
 Write errors:           0
 Non-2xx responses:      20000
 Total transferred:      6460000 bytes
 HTML transferred:       3440000 bytes
 Requests per second:    1317.45 [#/sec] (mean)
 Time per request:       75.904 [ms] (mean)
 Time per request:       0.759 [ms] (mean, across all concurrent requests)
 Transfer rate:          415.56 [Kbytes/sec] received

 Connection Times (ms)
               min  mean[+/-sd] median   max
 Connect:        0    0   0.5      0       4
 Processing:    12   75  31.2     68     474
 Waiting:        9   73  31.4     66     471
 Total:         12   76  31.3     68     475

 Percentage of the requests served within a certain time (ms)
   50%     68
   66%     80
   75%     85
   80%     88
   90%     96
   95%    106
   98%    114
   99%    125
  100%    475 (longest request)


Nodejs

 Concurrency Level:      100
 Time taken for tests:   23.038 seconds
 Complete requests:      20000
 Failed requests:        0
 Write errors:           0
 Total transferred:      2700000 bytes
 HTML transferred:       260000 bytes
 Requests per second:    868.12 [#/sec] (mean)
 Time per request:       115.192 [ms] (mean)
 Time per request:       1.152 [ms] (mean, across all concurrent requests)
 Transfer rate:          114.45 [Kbytes/sec] received

 Connection Times (ms)
               min  mean[+/-sd] median   max
 Connect:        0    0   0.6      0      52
 Processing:    39  114  21.7    109     306
 Waiting:       28  112  21.5    107     305
 Total:         40  115  21.7    109     306

 Percentage of the requests served within a certain time (ms)
   50%    109
   66%    117
   75%    125
   80%    130
   90%    145
   95%    155
   98%    168
   99%    186
  100%    306 (longest request)

It is worth mentioning that all of this runs in VirtualBox, with a single core of a Core i7 allocated to it.
As a result, we got roughly a 1.5x difference in favor of Fastcgi Daemon, which, I think, is not bad at all for NodeJS. Except that NodeJS's CPU consumption reached 50% and its memory usage rose to 45 MB (5.6 MB at rest), while Fastcgi Daemon ate no more than 20% of the CPU and 9.5 MB of RAM (4.5 MB at rest). That is, the latter is friendlier to resources, which is not surprising. Of course, the comparison turned out rather spherical-in-a-vacuum. Properly, one should write more substantial code, hook up a database, and run both applications in multithreaded mode. But this will do for a start.

Instead of a conclusion


In my opinion, a very interesting project has come out of the depths of Yandex. Besides the fact that Fastcgi Daemon greatly simplifies writing web applications in C++, it also ships the init.d scripts needed for convenient management of finished applications. Next time I will describe building an authorization service on top of Fastcgi Daemon, with sessions, a database and templates.

Source: https://habr.com/ru/post/216181/

