
Solving the "EMFILE, too many open files" problem

Good day.
Writing 404 errors to a separate log file is such a common task that it would seem there could be no difficulty with it. At least, that is what I thought, until one second brought fifteen hundred client requests for missing files.
The Node.js server spat out "EMFILE, too many open files" and died.
(In debug mode I deliberately do not catch errors that reach the main loop.)

So, here is what the original save-to-file function looked like:
    log: function (filename, text) {
        // appends the line now() + text to the file `filename`
        var s = utils.digitime() + ' ' + text + '\n';
        // utils.digitime() returns the current date and time as a string
        fs.open(LOG_PATH + filename, "a", 0x1a4, function (error, file_handle) {
            if (!error) {
                fs.write(file_handle, s, null, 'utf8', function (err) {
                    if (err) {
                        console.log(ERR_UTILS_FILE_WRITE + filename + ' ' + err);
                    }
                    fs.close(file_handle, function () {});
                });
            } else {
                console.log(ERR_UTILS_FILE_OPEN + filename + ' ' + error);
            }
        });
    },

Well, everything here is done head-on: we open the file, write to it, and if anything goes wrong we print the error to the console. However, as mentioned above, if this is called too often, the files simply do not have time to close. On Linux, for example, we run into the limit on open file descriptors (kern.maxfiles), with all the unpleasant consequences.
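To make the failure mode concrete, here is a minimal standalone sketch (mine, not from the original article) that reproduces the error: it opens the same file many times in parallel without waiting for earlier handles to be closed, so the descriptors run out. The path and the count are arbitrary.

    var fs = require('fs');

    // Start far more concurrent opens than the process may hold file
    // descriptors for; none of them are closed until the write finishes,
    // so the limit is exhausted almost immediately.
    for (var i = 0; i < 5000; i++) {
        fs.open('./404.log', 'a', 0x1a4, function (error, file_handle) {
            if (error) {
                // with enough pending opens this prints:
                // Error: EMFILE, too many open files
                console.log(error);
                return;
            }
            fs.write(file_handle, 'test\n', null, 'utf8', function () {
                fs.close(file_handle, function () {});
            });
        });
    }

Raising the descriptor limit (ulimit) only postpones the failure; the real fix is to bound how many files are open at the same time, which is what follows.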

The most interesting part

For the solution, I chose the async library, without which I can no longer imagine life.
I moved the log function itself into the module's "private" scope, renamed it __log, and modified it slightly: it now returns a task that takes a callback:
    __log = function (filename, text) {
        return function (callback) {
            var s = utils.digitime() + ' ' + text + '\n';
            fs.open(LOG_PATH + filename, "a", 0x1a4, function (error, file_handle) {
                if (!error) {
                    fs.write(file_handle, s, null, 'utf8', function (err) {
                        if (err) {
                            console.log(ERR_UTILS_FILE_WRITE + filename + ' ' + err);
                        }
                        fs.close(file_handle, function () {
                            callback();
                        });
                    });
                } else {
                    console.log(ERR_UTILS_FILE_OPEN + filename + ' ' + error);
                    callback();
                }
            });
        };
    };

Most importantly, in the private part we create the variable __writeQueue:
    __writeQueue = async.queue(function (task, callback) {
        task(callback);
    }, MAX_OPEN_FILES);


and the public log function of the module now looks quite simple:
    log: function (filename, text) {
        __writeQueue.push(__log(filename, text));
    },
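For context, here is a rough sketch of how all the pieces might fit together in one module. This is my own assembly, not code from the article: the values of LOG_PATH, MAX_OPEN_FILES and the error-message constants are illustrative, and digitime() is a stand-in for the helper the article only mentions.

    // utils.js (sketch)
    var fs = require('fs');
    var async = require('async');

    var LOG_PATH = './logs/';            // illustrative
    var MAX_OPEN_FILES = 256;            // well below the descriptor limit
    var ERR_UTILS_FILE_OPEN = 'utils.log: cannot open ';
    var ERR_UTILS_FILE_WRITE = 'utils.log: cannot write ';

    // stand-in for the article's digitime(): current date/time as a string
    var digitime = function () {
        return new Date().toISOString();
    };

    // --- private part of the module ---
    var __log = function (filename, text) {
        return function (callback) {
            var s = digitime() + ' ' + text + '\n';
            fs.open(LOG_PATH + filename, 'a', 0x1a4, function (error, file_handle) {
                if (error) {
                    console.log(ERR_UTILS_FILE_OPEN + filename + ' ' + error);
                    return callback();
                }
                fs.write(file_handle, s, null, 'utf8', function (err) {
                    if (err) {
                        console.log(ERR_UTILS_FILE_WRITE + filename + ' ' + err);
                    }
                    fs.close(file_handle, function () {
                        callback();
                    });
                });
            });
        };
    };

    // at most MAX_OPEN_FILES tasks are executed concurrently
    var __writeQueue = async.queue(function (task, callback) {
        task(callback);
    }, MAX_OPEN_FILES);

    // --- public part of the module ---
    module.exports = {
        log: function (filename, text) {
            __writeQueue.push(__log(filename, text));
        }
    };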

And that is all?!

Exactly. Other modules keep calling this function the same way as before, for example:
    function errorNotFound (req, res) {
        utils.log(LOG_404, '' + req.method + '\t' + req.url + '\t(' + (accepts) + ')\t requested from ' + utils.getClientAddress(req));
        ..

and no more errors occur.

The mechanism is simple: we set the MAX_OPEN_FILES constant to a reasonable number below the maximum allowed number of open file descriptors (for example, 256). From that point on, write attempts run in parallel, but only until their number reaches that limit. Everything that arrives after that is queued and runs only once the earlier attempts have finished (remember we added a callback? That is exactly what it is for).
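The queuing behaviour is easy to watch in isolation. A small standalone illustration (not from the article): with a concurrency of 2, only two tasks run at a time and the rest wait until a callback frees a slot.

    var async = require('async');

    var CONCURRENCY = 2; // stands in for MAX_OPEN_FILES

    // the worker simply runs whatever task function was pushed
    var queue = async.queue(function (task, callback) {
        task(callback);
    }, CONCURRENCY);

    for (var i = 1; i <= 5; i++) {
        (function (n) {
            queue.push(function (callback) {
                console.log('task ' + n + ' started');
                setTimeout(function () {
                    console.log('task ' + n + ' finished');
                    callback();
                }, 100);
            });
        })(i);
    }

    // Tasks 1 and 2 start immediately; 3, 4 and 5 start only as
    // earlier tasks call their callbacks.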

I hope this article helps those who have run into this problem, or, even better, serves as a preventive measure.
Good luck.

Source: https://habr.com/ru/post/158329/

