Remark from the author
This article is new, but it is not about new features. It is about the core, that is, about the platform itself, because many who simply use Grunt or webpack may be unaware of the fundamentals, so to speak.
Read more:
rumkin comments:
habrahabr.ru/company/mailru/blog/283228/#comment_8890604
Aiditz comments:
habrahabr.ru/company/mailru/blog/283228/#comment_8890476
Suvitruf comments:
habrahabr.ru/company/mailru/blog/283228/#comment_8890430
Consider this typical blocking code (here in Java), where each statement waits for the previous one to finish:

System.out.println("Step: 1");
System.out.println("Step: 2");
Thread.sleep(1000);
System.out.println("Step: 3");

In Node the same logic is written with a deferred callback, so the program does not sit and wait for the timer:

console.log('Step: 1')
setTimeout(function () {
  console.log('Step: 3')
}, 1000)
console.log('Step: 2')
This prints 1, 2 and only then 3, because setTimeout defers its callback. Take a look at this snippet:

console.log('Step: 1')
setTimeout(function () {
  console.log('Step: 3')
  console.log('Step 5')
}, 1000);
console.log('Step: 2')
console.log('Step 4')

The output will be Step: 1, Step: 2, Step 4, and only about a second later Step: 3 and Step 5, because setTimeout places its callback in a future iteration of the event loop. You can think of the event loop as an endless for or while loop. It stops only when there is nothing more to do, either now or in the future.

console.log('Step: 1')
var start = Date.now()
for (var i = 1; i < 1000000000; i++) {
  // This will take 100-1000ms depending on your machine
}
var end = Date.now()
console.log('Step: 2')
console.log(end - start)

Unlike the deferred callback above, this loop is synchronous CPU work: while it spins, the event loop can do nothing else.
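When heavy CPU work like this cannot be avoided, it can be split into small batches so that the event loop stays responsive in between. A minimal sketch of my own (not from the original article; the function name countToBillion is made up), using setImmediate:

function countToBillion(done) {
  var i = 0
  var total = 1000000000
  function batch() {
    var stop = Math.min(i + 1000000, total)
    for (; i < stop; i++) {
      // a small slice of the work
    }
    if (i < total) {
      setImmediate(batch) // yield to the event loop between batches
    } else {
      done()
    }
  }
  batch()
}

countToBillion(function () {
  console.log('done counting')
})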
The fs module (file system) comes with two sets of methods. Each pair does the same thing, but in different ways. The blocking methods in the fs module have the word Sync in their names:

var fs = require('fs')
var contents = fs.readFileSync('accounts.txt', 'utf8')
console.log(contents)
console.log('Hello Ruby\n')
var contents = fs.readFileSync('ips.txt', 'utf8')
console.log(contents)
console.log('Hello Node!')

The output is strictly sequential: data1->Hello Ruby->data2->Hello Node!

The non-blocking version of the same code:

var fs = require('fs');
var contents = fs.readFile('accounts.txt', 'utf8', function (err, contents) {
  console.log(contents);
});
console.log('Hello Python\n');
var contents = fs.readFile('ips.txt', 'utf8', function (err, contents) {
  console.log(contents);
});
console.log("Hello Node!");

Here the file contents are displayed last: reading takes some time, and the callbacks run only when the event loop gets to them, after the reads complete: Hello Python->Hello Node->data1->data2
How can you tell that you are running in Node and not in the browser? In Node there is no window; instead there is the global object, and its properties are available from anywhere:

global.process: process, system and environment information (you can access CLI input data, environment variables with passwords, memory usage, etc.).
global.__filename: the file name, including the full path, of the currently running script that contains this expression.
global.__dirname: the full path to the folder of the currently running script.
global.module: an object for exporting code that turns this file into a module.
global.require(): a method for importing modules, JSON files and folders.
global.console(), global.setInterval(), global.setTimeout(): the familiar console and timers, just like in the browser.

All of these properties can also be accessed with the GLOBAL name typed in capital letters, or without a name at all, simply by writing process instead of global.process.
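As a quick illustration (a sketch of my own, not from the original article), a few of these globals can be printed from any script; the file name globals.js is just an assumed example:

// globals.js: prints a few of Node's global values
console.log(__filename)            // full path of this script file
console.log(__dirname)             // folder that contains it
console.log(process.pid)           // pid of the current Node process
console.log(process.versions.node) // the Node.js version
setTimeout(function () {           // timers are global, no require needed
  console.log('one tick of the event loop later')
}, 0)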
process.pid: the process ID of this Node instance.
process.versions: versions of Node, V8 and other components.
process.arch: the system architecture.
process.argv: CLI arguments.
process.env: environment variables.
process.uptime(): returns the running time.
process.memoryUsage(): returns the amount of memory consumed.
process.cwd(): returns the current working directory. Not to be confused with __dirname, which does not depend on the place from which the process was launched.
process.exit(): exits the current process; you can pass an exit code such as 0 or 1.
process.on(): attaches an event listener, for example on('uncaughtException').

Node code relies heavily on callbacks, and deeply nested callbacks quickly turn into the notorious callback hell, as in this image-resizing example built on the gm module:

fs.readdir(source, function (err, files) {
  if (err) {
    console.log('Error finding files: ' + err)
  } else {
    files.forEach(function (filename, fileIndex) {
      console.log(filename)
      gm(source + filename).size(function (err, values) {
        if (err) {
          console.log('Error identifying file size: ' + err)
        } else {
          console.log(filename + ' : ' + values)
          aspect = (values.width / values.height)
          widths.forEach(function (width, widthIndex) {
            height = Math.round(width / aspect)
            console.log('resizing ' + filename + ' to ' + height + 'x' + height)
            this.resize(width, height).write(dest + 'w' + width + '_' + filename, function (err) {
              if (err) console.log('Error writing file: ' + err)
            })
          }.bind(this))
        }
      })
    })
  }
})
Event emitters are one way to bring order into asynchronous code. An event emitter is an object that triggers events to which anyone can subscribe. To use one, import the events module and create an emitter instance:

var events = require('events')
var emitter = new events.EventEmitter()

Then attach listeners and fire the event:

emitter.on('knock', function() {
  console.log('Who\'s there?')
})

emitter.on('knock', function() {
  console.log('Go away!')
})

emitter.emit('knock')
Let's make EventEmitter do something more useful by inheriting from it. Suppose you are asked to implement a class that runs some job on a schedule: monthly, weekly, or daily. The class must be flexible enough for other developers to customize the final result; in other words, anyone using it should be able to plug their own logic in when the job is finished.

We make Job inherit from the events module and then use a listener on the done event to customize the behavior of the Job class. The Job class keeps its own properties but also gains events; at the end of the work we simply emit the done event:

// job.js
var util = require('util')
var Job = function Job() {
  var job = this
  // ...
  job.process = function() {
    // ...
    job.emit('done', { completedOn: new Date() })
  }
}

util.inherits(Job, require('events').EventEmitter)
module.exports = Job

Now the goal is to customize the behavior of Job at the end of the task. Since it emits done, we can attach an event listener:

// weekly.js
var Job = require('./job.js')
var job = new Job()

job.on('done', function(details){
  console.log('Job was completed at', details.completedOn)
  job.removeAllListeners()
})

job.process()
Besides on(), other useful EventEmitter methods are:

emitter.listeners(eventName): lists all listeners attached to this event.
emitter.once(eventName, listener): attaches a one-time event listener.
emitter.removeListener(eventName, listener): removes the event listener.
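As a quick illustration (my own sketch, not from the original article) of how once() and removeListener() differ from on():

var EventEmitter = require('events').EventEmitter
var emitter = new EventEmitter()

function onPing() { console.log('ping handled') }

emitter.on('ping', onPing)              // fires on every 'ping'
emitter.once('ping', function () {      // fires only on the first 'ping'
  console.log('first ping only')
})

emitter.emit('ping')  // prints: ping handled, first ping only
emitter.emit('ping')  // prints: ping handled

emitter.removeListener('ping', onPing)  // detach the remaining listener
emitter.emit('ping')  // prints nothing

console.log(emitter.listeners('ping'))  // [] since no listeners are left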
Now to streams. A good example of a readable stream is process.stdin, the standard input stream. It contains the data going into the application, usually keyboard input used to start the process. To read data from stdin, the data and end events are used. The callback of the data event receives a chunk as an argument:

process.stdin.resume()
process.stdin.setEncoding('utf8')

process.stdin.on('data', function (chunk) {
  console.log('chunk: ', chunk)
})

process.stdin.on('end', function () {
  console.log('--- END ---')
})
Here chunk is the data fed to the program as input. Depending on the total amount of incoming data, this event can fire several times. The end of the stream is signaled by the end event. Note also that stdin is paused by default and must be resumed (process.stdin.resume()) before data can be read from it.

Readable streams also have a read() interface. It returns a chunk, or null when the stream has ended. We can take advantage of this behavior by putting the construction null !== (chunk = readable.read()) into a while condition:

var readable = getReadableStreamSomehow()

readable.on('readable', () => {
  var chunk
  while (null !== (chunk = readable.read())) {
    console.log('got %d bytes of data', chunk.length)
  }
})

Ideally we want to write non-blocking code in Node as much as possible; here, however, the chunks are small and read() only returns data that is already buffered, so the synchronous readable.read() does not block the thread in any noticeable way.
A writable stream example is process.stdout, the standard output stream. It contains data that leaves the application. You can write to the stream with the write operation:

process.stdout.write('A simple message\n')

Data written to standard output appears on the command line, just like with console.log(), which uses process.stdout.write under the hood.

Streams can be connected with the pipe() method. The following example reads data from a file, compresses it with gzip and writes the result to another file:

var fs = require('fs')
var zlib = require('zlib')

var r = fs.createReadStream('file.txt')
var z = zlib.createGzip()
var w = fs.createWriteStream('file.txt.gz')
r.pipe(z).pipe(w)
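Pipes work just as well with the standard streams. A tiny sketch of my own (not from the article; gzip-filter.js is an assumed file name) turns the same compression into a command-line filter:

// gzip-filter.js
// usage: node gzip-filter.js < file.txt > file.txt.gz
var zlib = require('zlib')
process.stdin.pipe(zlib.createGzip()).pipe(process.stdout)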
Readable.pipe() takes a writable stream, passes the data into it and returns the destination stream, so we can build chains of pipe() calls.

So when working with streams you can choose between events and pipes. HTTP requests and responses are streams too: we can subscribe to the data event and receive a chunk in its callback, transforming it right away without waiting for the entire response. In the following example, we concatenate the body and parse it in the callback of the end event:

const http = require('http')

var server = http.createServer( (req, res) => {
  var body = ''
  req.setEncoding('utf8')
  req.on('data', (chunk) => {
    body += chunk
  })
  req.on('end', () => {
    var data = JSON.parse(body)
    res.write(typeof data)
    res.end()
  })
})

server.listen(1337)
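On the client side, requests and responses are streams as well. A small sketch of my own (assuming the server above is running on port 1337): it streams a JSON body to the server and prints the reply, which will be the string object:

const http = require('http')

const payload = JSON.stringify({ name: 'node' })

const req = http.request({
  host: 'localhost',
  port: 1337,
  method: 'POST',
  headers: { 'Content-Type': 'application/json' }
}, (res) => {
  res.setEncoding('utf8')
  res.on('data', (chunk) => process.stdout.write(chunk)) // prints: object
  res.on('end', () => process.stdout.write('\n'))
})

req.write(payload) // the request object itself is a writable stream
req.end()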
By the way, ()=>{} is the new ES6 syntax for anonymous (arrow) functions, and const is a new operator. If the features and syntax of ES6/ES2015 are unfamiliar to you, have a look at the article "Top 10 ES6 features that every busy JavaScript developer should know about".

Back to streams. The following server (using Express.js) serves a large image (largeImagePath) at two routes, /stream and /non-stream:

app.get('/non-stream', function(req, res) {
  var file = fs.readFile(largeImagePath, function(error, data){
    res.end(data)
  })
})

app.get('/stream', function(req, res) {
  var stream = fs.createReadStream(largeImagePath)
  stream.pipe(res)
})
The server also has an event-based implementation of the same idea at /stream2 and a synchronous implementation at /non-stream2. They do the same thing, only with a different syntax and style (the synchronous variant is tolerable here only because we send a single request, not many concurrent ones). Start the server:

$ node server-stream

Open the routes in the browser and look at the X-Response-Time header in the developer tools. In my case the streaming routes (/stream, /stream2) and the non-streaming ones differed by an order of magnitude: about 300 ms versus 3-5 seconds. Your numbers may differ, but the idea is clear: with /stream, users and clients start receiving data earlier. Streaming in Node is a very powerful tool! You can learn to manage streaming resources well and become the streams expert on your team, for example with the stream-adventure workshop:

$ sudo npm install -g stream-adventure
$ stream-adventure
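For reference, here is a rough sketch of what the /stream2 and /non-stream2 routes mentioned above might look like; this is my own assumed code, not the article's server-stream.js, and it reuses app, fs and largeImagePath from the previous snippet:

app.get('/stream2', function (req, res) {
  var stream = fs.createReadStream(largeImagePath)
  stream.on('data', function (chunk) {
    res.write(chunk)   // push each chunk to the client as soon as it is read
  })
  stream.on('end', function () {
    res.end()
  })
})

app.get('/non-stream2', function (req, res) {
  var data = fs.readFileSync(largeImagePath) // blocks until the whole file is in memory
  res.end(data)
})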
Buffers are Node's binary data type; a buffer is created in one of four ways:

new Buffer(size)
new Buffer(array)
new Buffer(buffer)
new Buffer(str[, encoding])

The default encoding is utf8. To convert a buffer to a string, call toString(). Let's create a buffer holding the alphabet using a for loop:

let buf = new Buffer(26)
for (var i = 0 ; i < 26 ; i++) {
  buf[i] = i + 97 // 97 is ASCII a
}

Printed as is, the buffer shows hexadecimal byte values:

console.log(buf)
// <Buffer 61 62 63 64 65 66 67 68 69 6a 6b 6c 6d 6e 6f 70 71 72 73 74 75 76 77 78 79 7a>

Converted to a string:

buf.toString('utf8') // outputs: abcdefghijklmnopqrstuvwxyz
buf.toString('ascii') // outputs: abcdefghijklmnopqrstuvwxyz

You can also pass the start and end positions:

buf.toString('ascii', 0, 5) // outputs: abcde
buf.toString('utf8', 0, 5) // outputs: abcde
buf.toString(undefined, 0, 5) // encoding defaults to 'utf8', outputs abcde
Remember the fs examples? The data value in the readFile callback is also a buffer:

fs.readFile('/etc/passwd', function (err, data) {
  if (err) return console.error(err)
  console.log(data)
});

So data acts as a buffer whenever we work with files.
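Buffers also show up whenever you collect chunks from a stream. A small sketch of my own (assuming a file ips.txt exists, as in the earlier examples):

var fs = require('fs')
var chunks = []

fs.createReadStream('ips.txt')
  .on('data', function (chunk) {
    chunks.push(chunk)                 // each chunk is a Buffer
  })
  .on('end', function () {
    var whole = Buffer.concat(chunks)  // join the chunks into a single Buffer
    console.log(whole.length + ' bytes')
    console.log(whole.toString('utf8'))
  })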
With the cluster module (you do not need to install it, it is part of the platform) we can use all the processor resources of any machine. In other words, thanks to clusters we can scale Node applications vertically.

The code is very simple: we check whether the current process is the master or a worker. For the master the check is cluster.isMaster, for the worker it is cluster.isWorker. Most of the server code lives in the worker (isWorker):

// cluster.js
var cluster = require('cluster')
var numCPUs = require('os').cpus().length

if (cluster.isMaster) {
  for (var i = 0; i < numCPUs; i++) {
    cluster.fork()
  }
} else if (cluster.isWorker) {
  // your server code goes here
}
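The worker part is elided above. A fuller sketch of my own (cluster-full.js is an assumed file name), filling it in with a trivial HTTP server on port 3000, the port the load test below assumes:

// cluster-full.js
var cluster = require('cluster')
var http = require('http')
var numCPUs = require('os').cpus().length

if (cluster.isMaster) {
  for (var i = 0; i < numCPUs; i++) {
    cluster.fork()                      // one worker per CPU core
  }
} else if (cluster.isWorker) {
  http.createServer(function (req, res) {
    res.end('handled by worker ' + process.pid + '\n')
  }).listen(3000)                       // the workers share port 3000
}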
You can check for yourself that the load really is distributed between workers. I used the loadtest tool for this. Install loadtest from npm:

$ npm install -g loadtest

Run code/cluster.js with node ($ node cluster.js) and leave the server running. In a separate terminal start the load test:

$ loadtest http://localhost:3000 -t 20 -c 10

Watch the results both in the server terminal and in the loadtest terminal: the requests are spread across several worker processes. The options -t 20 -c 10 mean 10 concurrent clients and a maximum test duration of 20 seconds.

Besides the built-in cluster module there are ready-made tools for the same task: strong-cluster-control (https://github.com/strongloop/strong-cluster-control), started with $ slc run, and pm2 (https://github.com/Unitech/pm2).
pm2 deserves a closer look: it is one of the ways to take a Node application to production. A nice property of pm2 is that the application code does not need to know anything about clustering. Take this server.js: there is no isMaster() check and no cluster logic at all, because pm2 forks the workers itself. The server only logs the pid of the process that handled the request and keeps simple per-process statistics:

var express = require('express')
var port = 3000

global.stats = {}

console.log('worker (%s) is now listening to http://localhost:%s', process.pid, port)

var app = express()

app.get('*', function(req, res) {
  if (!global.stats[process.pid]) global.stats[process.pid] = 1
  else global.stats[process.pid] += 1;
  var l ='cluser ' + process.pid + ' responded \n';
  console.log(l, global.stats)
  res.status(200).send(l)
})

app.listen(port)
Start this server with pm2 start server.js. You can specify the number of instances/workers to spawn (-i 0 means as many as there are CPU cores, 4 in my case) and send the log to a file with -l log.txt:

$ pm2 start server.js -i 0 -l ./log.txt

To see what is running, execute:

$ pm2 list

Then run loadtest just as in the cluster example:

$ loadtest http://localhost:3000 -t 20 -c 10

In log.txt you will see that the requests are handled by different processes:

cluser 67415 responded { '67415': 4078 }
cluser 67430 responded { '67430': 4155 }
cluser 67404 responded { '67404': 4075 }
cluser 67403 responded { '67403': 4054 }
In the cluster.js example we spawned new Node processes with fork(). Altogether there are three ways to launch an external process from a Node.js application: spawn(), fork() and exec(). All three live in the core child_process module. The differences are:

require('child_process').spawn(): suited to large amounts of data; supports streams; can run any command; does not create a new V8 instance.
require('child_process').fork(): creates a new V8 instance and is meant for spawning multiple Node workers; works only with Node.js scripts (the node command).
require('child_process').exec(): buffers the output, so it is poorly suited to large data; returns everything at once through a callback; can run any command, not only node.

In the following spawn() example we run node program.js, but the command could just as well be bash, Python, Ruby or any other command or script; additional arguments are passed as arguments to spawn(). The output arrives as a stream through the data event:

var fs = require('fs')
var childProcess = require('child_process')

var p = childProcess.spawn('node', ['program.js'])

p.stdout.on('data', function (data) {
  console.log('stdout: ' + data)
})
Here the output of the node program.js command arrives in the data event handler in chunks, as a stream, as it is produced.

The fork() method is essentially the same as spawn(), with one difference: fork() creates a new V8 instance, so it only works for launching Node.js scripts:

var fs = require('fs')
var childProcess = require('child_process')

// silent: true pipes the child's stdio so that p.stdout is available here
var p = childProcess.fork('program.js', [], { silent: true })

p.stdout.on('data', function (data) {
  console.log('stdout: ' + data)
})
Finally, exec(). It works differently: instead of events it uses a single callback, which receives the error, standard output and standard error parameters:

var fs = require('fs')
var childProcess = require('child_process')

var p = childProcess.exec('node program.js', function (error, stdout, stderr) {
  if (error) console.log(error.code)
})

The difference between error and stderr is that error is not null when exec() could not successfully run the command (for example, program.js does not exist or exits with an error code), while stderr contains the error output of the program itself (for example, whatever program.js wrote to its standard error while running).
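A short sketch of my own illustrating that difference (the file name no-such-file.js is assumed not to exist):

var childProcess = require('child_process')

// error: the command itself fails (node exits with a non-zero code because the file is missing)
childProcess.exec('node no-such-file.js', function (error, stdout, stderr) {
  console.log('error code:', error && error.code) // non-zero exit code
  console.log('stderr:', stderr.trim())           // node's own error message
})

// stderr without error: the command runs fine but writes to its error output
childProcess.exec('node -e "console.error(42)"', function (error, stdout, stderr) {
  console.log('error:', error)                    // null, the exit code was 0
  console.log('stderr:', stderr.trim())           // '42'
})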
Now a few words about handling errors in asynchronous code. A plain try/catch only helps with synchronous code:

try {
  throw new Error('Fail!')
} catch (e) {
  console.log('Custom Error: ' + e.message)
}

But if the error is thrown later, inside a deferred callback such as setTimeout() (think of reading a file or making an HTTP request), try/catch can no longer help:

try {
  setTimeout(function () {
    throw new Error('Fail!')
  }, Math.round(Math.random()*100))
} catch (e) {
  console.log('Custom Error: ' + e.message)
}

The process crashes with 'Fail!': by the time the callback runs, the try/catch block is long gone, so the exception is not caught. You could wrap the body of every callback in its own try/catch, but that is tedious and easy to forget.

Instead, the standard convention is to pass the error as the first argument of a callback. Check it and either pass it further up the chain or at least log it:

if (error) return callback(error)
// or
if (error) return console.error(error)
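A small sketch of my own showing that convention in a chain (readConfig is a hypothetical helper, config.json an assumed file):

var fs = require('fs')

function readConfig(path, callback) {
  fs.readFile(path, 'utf8', function (error, contents) {
    if (error) return callback(error)   // pass the error up the chain
    var parsed
    try {
      parsed = JSON.parse(contents)     // synchronous, so try/catch works here
    } catch (e) {
      return callback(e)
    }
    callback(null, parsed)              // null as the first argument means success
  })
}

readConfig('./config.json', function (error, config) {
  if (error) return console.error(error) // the caller decides how to react
  console.log(config)
})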
Beyond that, there are several tools for dealing with asynchronous errors: the uncaughtException event, the domain module (soft-deprecated), and AsyncWrap.

Listen to all on('error') events emitted by the core Node.js objects, especially by http servers. Anything that inherits from or creates an instance of http, such as Express.js, LoopBack, Sails, Hapi and so on, emits error as well:

server.on('error', function (err) {
  console.error(err)
  process.exit(1)
})
Always subscribe to the uncaughtException event on the process object! uncaughtException is a very crude mechanism: an uncaught exception means that your application, and Node.js itself, is in an undefined state, so the safest reaction is to log the error and shut the process down:

process.on('uncaughtException', function (err) {
  console.error('uncaughtException: ', err.message)
  console.error(err.stack)
  process.exit(1)
})

or

process.addListener('uncaughtException', function (err) {
  console.error('uncaughtException: ', err.message)
  console.error(err.stack)
  process.exit(1)
})
The domain module has nothing to do with web domains. It is a core Node.js module for handling asynchronous errors by saving the context in which the asynchronous code runs. The basic usage is to create a domain, attach an error handler and run the risky code inside the run() callback:

var domain = require('domain').create()

domain.on('error', function(error){
  console.log(error)
})

domain.run(function(){
  throw new Error('Failed!')
})

domain is soft-deprecated: the Node core team will most likely remove it from the platform eventually, but for now there is no alternative in the core. In addition, because domain has solid support and wide usage, it may well live on as a separate npm module, so switching from the core module to the npm one would be easy.

Now let's make the error asynchronous with the same setTimeout():

// domain-async.js:
var d = require('domain').create()

d.on('error', function(e) {
  console.log('Custom Error: ' + e)
})

d.run(function() {
  setTimeout(function () {
    throw new Error('Failed!')
  }, Math.round(Math.random()*100))
});

The domain catches the asynchronous error as well: the error handler prints "Custom Error", and Node does not crash.
One more core capability: native addons. You can write a Node module in C++ and call it from JavaScript. Create a file hello.cc in the project folder and put this boilerplate code into it:

#include <node.h>

namespace demo {

using v8::FunctionCallbackInfo;
using v8::HandleScope;
using v8::Isolate;
using v8::Local;
using v8::Object;
using v8::String;
using v8::Value;

void Method(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();
  args.GetReturnValue().Set(String::NewFromUtf8(isolate, "capital one")); // the returned string
}

void init(Local<Object> exports) {
  NODE_SET_METHOD(exports, "hello", Method); // exporting the method
}

NODE_MODULE(addon, init)

}

Even without knowing C++ it is easy to see what happens here. We return the string capital one by setting it as the return value:

args.GetReturnValue().Set(String::NewFromUtf8(isolate, "capital one"));

and we export the function under the name hello:

void init(Local<Object> exports) {
  NODE_SET_METHOD(exports, "hello", Method);
}
When hello.cc is ready, it needs to be compiled. Next to it, create a file binding.gyp with the build configuration:

{
  "targets": [
    {
      "target_name": "addon",
      "sources": [ "hello.cc" ]
    }
  ]
}

Keep binding.gyp in the same folder as hello.cc and install node-gyp globally:

$ npm install -g node-gyp

Then, in the folder containing hello.cc and binding.gyp, run:

$ node-gyp configure
$ node-gyp build

A build folder appears; inside build/Release/ you will find the compiled .node addon. Finally, write the Node.js script hello.js that uses the C++ addon:

var addon = require('./build/Release/addon')
console.log(addon.hello()) // 'capital one'

To see capital one printed, run:

$ node hello.js
Source: https://habr.com/ru/post/283228/