
Writing a load tester in Node.js

In this post we will talk about writing a utility for load testing HTTP services in Node.js, describe the tool itself, and outline its area of use.



Background




Over the weekend I urgently needed to load test our service. My first move was to install Yandex Tank, but it turned out it is still packaged only for Debian. OK Google: my work machine is a Mac, and I really didn't want to set up a virtual machine just for this, so I went to the test server, where I ran into dependency trouble and a shortage of memory. I didn't want to bother the administrator on a weekend, and my hands were itching more and more to write a simple and interesting utility myself. That is how Stress appeared.

I am not discouraging you from using Yandex Tank or JMeter, but if you need a quick tool that is simple to set up, I hope this one will be useful to you.



Why Node.js?



First, an asynchronous runtime keeps the code for executing simultaneous requests on a single core as simple as possible.

Second, the convenient built-in cluster module for spawning workers, together with a communication channel to them.

Third, the built-in HTTP server and socket.io for reports in the browser.



Extensibility




A non-extensible tool is a dead tool. In our case, customization may be necessary at every stage of a test: how the load is dispatched across cores, how requests are executed, how responses are handled, and how the results are reported.

These are all modules of your particular strategy, which I decided to call an attacker. There can be many attackers, and you can write your own. As the code below shows, each attacker consists of a dispatcher (distributes tasks from the master), a worker (executes the requests), a receiver (handles responses), a reporter (logs and aggregates results), and a frontend (pushes the report to the browser).
So far only one attacker, Step, has been implemented. Its behavior is similar to Yandex Tank's step mode, and this is enough for most tasks. It also writes every request to the logs, aggregates the results, and draws a graph.
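The idea of a step profile is easy to sketch: the RPS starts at some value and grows by a fixed increment up to a ceiling, with each level held for a while. The function below is a standalone illustration, not the tool's actual code, and the parameter names (from, step, to) are assumptions rather than Stress's real config keys:

```javascript
// Sketch of a step load profile: returns the RPS value for each step.
// Parameter names are illustrative, not Stress's actual config keys.
function stepProfile(from, step, to) {
  var steps = [];
  for (var rps = from; rps <= to; rps += step) {
    steps.push(rps);
  }
  return steps;
}

// stepProfile(10, 10, 50) -> [10, 20, 30, 40, 50]
```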



Writing the code




The seemingly simple architecture is complicated by the need to handle parallel requests. As you know, Node.js has a single working thread, and if you fire off a large number of simultaneous HTTP requests, they will start to queue up, inflating latency. So we immediately fork one worker per CPU core and communicate with them over the built-in channel using JSON messages.



```javascript
Stress.prototype.fork = function (cb) {
  var self = this;
  var pings = 0;
  var worker;
  if (cluster.isMaster) {
    for (var i = 0; i < numCPUs; i++) {
      worker = cluster.fork();
      worker.on("message", function (msg) {
        var data = JSON.parse(msg);
        if (data.type === "ping") {
          pings++;
          if (pings === self.workers.length) cb(); // all workers are ready, run the callback
        } else {
          self.attack.masterHandler(data); // hand the data to the current attack
        }
      });
      self.workers.push(worker); // keep a reference to the worker
    }
  } else {
    process.send(JSON.stringify({type: "ping"}));
    process.on("message", function (msg) {
      var data = JSON.parse(msg);
      if (data.taskIndex === undefined) {
        process.send("{}");
      } else {
        workerInstance.run(data); // run the task
      }
    });
  }
};
```




The Dispatcher's job is to distribute requests evenly across all cores.
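The even split can be sketched like this (a standalone illustration, not the actual Dispatcher code): divide the requests for the current step across the workers and hand out the remainder one by one:

```javascript
// Sketch: split `total` requests as evenly as possible among `workers` processes.
// Illustrative only; the real Dispatcher sends these counts over the cluster channel.
function splitRequests(total, workers) {
  var base = Math.floor(total / workers);
  var remainder = total % workers;
  var shares = [];
  for (var i = 0; i < workers; i++) {
    shares.push(base + (i < remainder ? 1 : 0));
  }
  return shares;
}

// splitRequests(10, 4) -> [3, 3, 2, 2]
```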

In the constructor, we call this method in parallel with all the preparatory work in init:



```javascript
async.parallel([
  this.init.bind(this),
  this.fork.bind(this)
], function () {
  if (cluster.isMaster) {
    self.next();
  }
});
```




The next method iterates over the tasks specified in the config file:



```javascript
Stress.prototype.next = function () {
  var task = this.tasks[this.currentTask];
  if (!task) {
    console.log("\nDone");
    process.exit();
  } else {
    var attacker = this.attackers[task.attack.type];
    this.attack = new attacker.dispatcher(this.workers, this.currentTask, this.attackers);
    this.attack.on("done", this.next.bind(this));
    this.attack.run();
    this.currentTask++;
  }
};
```




The Dispatcher, together with the Reporter, manages everything related to the current task. The worker itself is quite simple: it is just a wrapper around request.



```javascript
task.request.jar = request.jar(new FileCookieStore(config.cookieStore));
async.each(arr, function (_, next) {
  request(task.request, receiver.handle.bind(receiver, function (result) {
    result.pid = process.pid;
    result.reqs = reqs;
    result.url = task.request.url;
    result.duration = duration;
    reporter.logAll(result);
    next();
  }));
}, function () {
  process.send(JSON.stringify(receiver.report));
});
```




As you can see, the request object holds nothing but the options for the library of the same name, which lets you use all of its capabilities from the config. Requests also go through tough-cookie-filestore, which allows us to build chains of requests out of tasks: full testing often means load testing the closed, login-protected parts of a service.



Among other things, the Dispatcher can easily send the data that the Reporter has aggregated anywhere, for example to the client, where Google Charts is waiting for it.
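Aggregation itself can be as simple as computing summary statistics over the collected latencies. This is a sketch of what such a Reporter step might do, not the tool's actual implementation, and the field names are assumptions:

```javascript
// Sketch: aggregate latency samples (in ms) the way a Reporter might.
// Illustrative only; field names are assumptions, not Stress's real output format.
function aggregate(latencies) {
  var sorted = latencies.slice().sort(function (a, b) { return a - b; });
  var sum = sorted.reduce(function (acc, x) { return acc + x; }, 0);
  return {
    count: sorted.length,
    mean: sum / sorted.length,
    median: sorted[Math.floor(sorted.length / 2)],
    max: sorted[sorted.length - 1]
  };
}

// aggregate([120, 80, 100, 90, 110]) -> { count: 5, mean: 100, median: 100, max: 120 }
```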



```javascript
Step.prototype.masterHandler = function (data) {
  this.answers++;
  if (Object.keys(data).length) this.summary.push(data);
  if (this.answers === this.workers.length) {
    var aggregated = this.attacker.reporter.logAggregate(this.summary);
    this.attacker.frontend.emit("data", {
      aggregated: aggregated,
      step: this.currentStep
    });
    this.answers = 0;
    this.currentStep = this.currentStep + this.task.attack.step;
    this.run();
  }
};
```




If you remember to set webReport = true in the config and follow the link printed in the console, you can watch latency grow as the RPS increases:







Install and Run




```shell
git clone https://github.com/yarax/stress
cd stress
npm i
npm start
```




The configs folder contains a default file with requests to Google; there you can create your own config and run it as



```shell
npm start myConfigName
```
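A minimal config might look something like the sketch below. The field names here are assumptions pieced together from the code above (webReport, cookieStore, the tasks array with attack and request sections); check configs/default in the repository for the real format:

```javascript
// Hypothetical config sketch; see configs/default in the repo for the actual keys.
module.exports = {
  webReport: true,            // serve the live report in the browser
  cookieStore: "cookie.json", // file used by tough-cookie-filestore
  tasks: [
    {
      attack: { type: "step", step: 10 },  // step attacker (key names assumed)
      request: {                           // options passed straight to `request`
        url: "https://www.google.com",
        method: "GET"
      }
    }
  ]
};
```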




I would be glad if someone finds this article useful. Pull requests are welcome :)

Source: https://habr.com/ru/post/249403/


