
JavaScript Neural Networks

The idea for this article came up last summer, when I heard a talk on neural networks at a BigData conference. The speaker showered the audience with unfamiliar terms: "neuron", "training set", "train the model"... "I understood nothing; time to move into management," I thought. But recently the topic of neural networks finally touched my own work, and I decided to show, with a simple example, how to use this tool in JavaScript.

We will create a neural network that recognizes handwritten digits from 0 to 9. The working example takes just a few lines of code and will be clear even to programmers who have never dealt with neural networks before. You will be able to see how it all works right in the browser.


If you already know what a perceptron is, feel free to skip the next chapter.

Quite a bit of theory


Neural networks arose from research in artificial intelligence, namely from attempts to reproduce the ability of biological nervous systems to learn and correct errors by simulating the low-level structure of the brain. In the simplest case, a neural network consists of several interconnected neurons.

The mathematical neuron


A simple machine that converts input signals into a single output signal.

Mathematical neuron diagram

The input signals x1, x2, x3 … xn are transformed linearly, i.e. the body of the neuron receives the weighted signals w1x1, w2x2, w3x3 … wnxn, where wi is the weight of the corresponding signal. The neuron sums these signals, applies some function f(x) to the sum, and emits the output signal y.

As f(x), sigmoid or threshold functions are most often used.
sigmoid and threshold functions

The threshold function can take only two discrete values, 0 or 1. The value of the function changes when the input crosses the given threshold T.

The sigmoid is a continuous function; it can take infinitely many values in the range from 0 to 1.
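To make this concrete, here is a minimal sketch of both activation functions and a single neuron in plain JavaScript (the names `threshold`, `sigmoid` and `neuron` are my own, not part of any library):

```javascript
// Threshold (step) function: 0 below the threshold T, 1 at or above it
function threshold(x, T) {
  return x >= T ? 1 : 0;
}

// Sigmoid: smooth and continuous, maps any input into the range (0, 1)
function sigmoid(x) {
  return 1 / (1 + Math.exp(-x));
}

// A mathematical neuron: weighted sum of the inputs passed through f(x)
function neuron(inputs, weights, f) {
  var sum = inputs.reduce(function (acc, x, i) {
    return acc + x * weights[i];
  }, 0);
  return f(sum);
}

console.log(threshold(0.7, 0.5));                   // 1
console.log(sigmoid(0));                            // 0.5
console.log(neuron([1, 0.5], [0.4, 0.6], sigmoid)); // a value between 0 and 1
```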

UPD: the comments also mention ReLU and MaxOut as more modern activation functions.

Neural network architectures vary; we will consider one of the simplest implementations, the perceptron.

Perceptron Architecture


Perceptron Architecture

There is a layer of input neurons (where information enters from outside), a layer of output neurons (where the result can be read), and a number of so-called hidden layers between them. Neurons can be arranged in several layers. Each connection between neurons has its own weight Wij.

Input and output signals


Before feeding signals to the neurons of the network's input layer, we need to normalize them. Normalization of input data is the process of bringing all inputs to a common scale, i.e. mapping them to the interval [0,1] or [-1,1]. Without normalization, inputs of different magnitudes would have a disproportionate influence on the neuron, leading to wrong decisions: how else could we compare values of different orders?
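For example, pixel brightness values in the range 0 to 255 can be mapped to [0,1] with simple min-max normalization (a sketch; `normalize` is my own helper name, not from any library):

```javascript
// Min-max normalization: map raw values to the interval [0, 1]
function normalize(values) {
  var min = Math.min.apply(null, values);
  var max = Math.max.apply(null, values);
  return values.map(function (v) {
    return (v - min) / (max - min);
  });
}

console.log(normalize([0, 51, 255])); // [ 0, 0.2, 1 ]
```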

On the neurons of the output layer we also will not get a clean "1" or "0", and this is normal. There is a certain threshold above which we will assume that we received a "1" rather than a "0". We will talk about interpreting the results later.
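Such an interpretation can be sketched in a couple of lines (the 0.5 cutoff here is my own assumption for illustration; the choice of threshold is up to you):

```javascript
// Interpret a raw output neuron value as 0 or 1 using a cutoff
function interpret(value, cutoff) {
  return value >= (cutoff || 0.5) ? 1 : 0;
}

console.log(interpret(0.93)); // 1
console.log(interpret(0.12)); // 0
```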

"An example in the studio, but I already fall asleep"


For convenience, I recommend installing Node.js and npm locally.

We will describe the network using the Brain.js library. At the end of the article, I will also provide links to other libraries that can be configured in a similar way. I liked Brain.js for its speed and ability to save a trained model.

Let's try an example from the documentation - the XOR function emulator:
```javascript
var brain = require('brain.js');
var net = new brain.NeuralNetwork();

net.train([
  { input: [0, 0], output: [0] },
  { input: [0, 1], output: [1] },
  { input: [1, 0], output: [1] },
  { input: [1, 1], output: [0] }
]);

var output = net.run([1, 0]); // [0.987]
console.log(output);
```

We save all of this into the file simple1.js; for the example to work, install the brain module and run it:
```shell
npm install brain.js
node simple1.js
# [ 0.9331839217737243 ]
```


We have 2 input neurons and one output neuron; the brain.js library configures the hidden layer itself and puts as many neurons there as it sees fit (in this example, 3 neurons).

What we passed to the .train method is called a training set: an array of objects, each with an input and an output property (the arrays of input and output values). We did not normalize the incoming data, since it is already in the required form.

Please note: the output is not [0.987] but [0.9331…]. You may get a slightly different value. This is normal, because the training algorithm uses random numbers when initializing the weights.

The .run method returns the neural network's response to the array of input signals passed as its argument.

Other simple examples can be found in the brain documentation.

Recognize the numbers


First, we need images of handwritten digits of the same size. In our example we will use the MNIST digits module, a set of thousands of binarized 28x28 px images of handwritten digits from 0 to 9:

MNIST training sample

The original MNIST database contains 60,000 examples for training and 10,000 examples for testing; it can be downloaded from LeCun's website. The author of MNIST digits made some of these examples available to JavaScript, and the library has already normalized the input signals. With this module we can obtain training and test sets automatically.

I had to fork the MNIST digits library, since there was some confusion with the data. I reloaded 10,000 examples from the original database, so you need to use the MNIST digits from my repository.

Network configuration


The input layer needs 28x28 = 784 neurons, and the output layer 10 neurons. The hidden layer brain.js will configure itself; looking ahead, I'll note that it will contain 392 neurons. The training set will be generated by the mnist module.
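By default brain.js picks the hidden layer size on its own, but if you want to pin it down explicitly, the NeuralNetwork constructor accepts a hiddenLayers option (a sketch; the value 392 simply matches what the automatic choice gives us here):

```javascript
const brain = require('brain.js');

// Explicitly request one hidden layer of 392 neurons
// instead of relying on brain.js's automatic choice
const net = new brain.NeuralNetwork({
  hiddenLayers: [392]
});
```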

Training the model


Install mnist
```shell
npm install https://github.com/ApelSYN/mnist
```


Everything is ready, we train the network
```javascript
const brain = require('brain.js');
const fs = require('fs');
const mnist = require('mnist');

var net = new brain.NeuralNetwork();

const set = mnist.set(1000, 0);
const trainingSet = set.training;

net.train(trainingSet, {
  errorThresh: 0.005,  // error threshold to reach
  iterations: 20000,   // maximum training iterations
  log: true,           // console.log() progress periodically
  logPeriod: 1,        // number of iterations between logging
  learningRate: 0.3    // learning rate
});

let wstream = fs.createWriteStream('./data/mnistTrain.json');
wstream.write(JSON.stringify(net.toJSON(), null, 2));
wstream.end();

console.log('MNIST dataset with Brain.js train done.');
```

We create a network, take 1000 elements of the training set, and call the .train method, which trains the network; then we save everything into the './data/mnistTrain.json' file (do not forget to create the "./data" folder first).

If everything is done correctly, you will get something like this:
```
[root@HomeWebServer nn]# node train.js
iterations: 0 training error: 0.060402555338691676
iterations: 1 training error: 0.02802436102035996
iterations: 2 training error: 0.020358600820106914
iterations: 3 training error: 0.0159533285799183
iterations: 4 training error: 0.012557029942873513
iterations: 5 training error: 0.010245175822114688
iterations: 6 training error: 0.008218147206099617
iterations: 7 training error: 0.006798613211310184
iterations: 8 training error: 0.005629051609641436
iterations: 9 training error: 0.004910207736789503
MNIST dataset with Brain.js train done.
```


Everything can be recognized


It remains to write quite a bit of code - and the recognition system is ready!
```javascript
const brain = require('brain.js'),
      mnist = require('mnist');

var net = new brain.NeuralNetwork();

const set = mnist.set(0, 1);
const testSet = set.test;

net.fromJSON(require('./data/mnistTrain'));

var output = net.run(testSet[0].input);

console.log(testSet[0].output);
console.log(output);
```


We take 1 random test case from the sample of 10,000 records, load the previously trained model, feed the test record to the network input, and check whether it is recognized correctly.

Here is an example run
```
[ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0 ]
[ 0.0002863506627761867,
  0.00002389940760904011,
  0.00039954062883041345,
  0.9910109896013567,
  7.562879202664903e-7,
  0.0038756598319246837,
  0.000016752919557362786,
  0.0007205981595354964,
  0.13699517762991756,
  0.0011053963693377692 ]
```

In this example, a digitized "3" arrived at the input neurons (the first array is the ideal answer); at the network output we received an array in which the element at the same position 3 is close to one (0.9910109896013567). Note also the next value, 7.562…e-7, i.e. 7.562 times 10 to the power of -7: this is JavaScript's scientific notation for floating-point numbers.
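A simple way to read such an output vector is to take the index of its largest element; that index is the recognized digit. A minimal sketch (`argmax` is my own helper, not part of brain.js):

```javascript
// The index of the largest element of the output vector is the answer
function argmax(output) {
  var maxIndex = 0;
  for (var i = 1; i < output.length; i++) {
    if (output[i] > output[maxIndex]) {
      maxIndex = i;
    }
  }
  return maxIndex;
}

// Abbreviated output vector from the run above: position 3 is near 1
var output = [0.00029, 0.00002, 0.0004, 0.991, 7.6e-7, 0.0039, 0.00002, 0.0007, 0.137, 0.0011];
console.log(argmax(output)); // 3
```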

Well, the recognition was correct. Congratulations, our network works!

We will slightly "comb" our results with the softmax function, which I took from one example of machine learning:
```javascript
function softmax(output) {
  var maximum = output.reduce(function (p, c) {
    return p > c ? p : c;
  });
  var nominators = output.map(function (e) {
    return Math.exp(e - maximum);
  });
  var denominator = nominators.reduce(function (p, c) {
    return p + c;
  });
  var softmax = nominators.map(function (e) {
    return e / denominator;
  });

  var maxIndex = 0;
  softmax.reduce(function (p, c, i) {
    if (p < c) {
      maxIndex = i;
      return c;
    }
    return p;
  });

  var result = [];
  for (var i = 0; i < output.length; i++) {
    result.push(i === maxIndex ? 1 : 0);
  }
  return result;
}
```


The function can be placed at the beginning of our code, and the last line replaced with:
```javascript
console.log(softmax(output));
```


That's it: now everything works beautifully:
```
[root@HomeWebServer nn]# node simpleRecognize.js
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ 0, 0, 1, 0, 0, 0, 0, 0, 0, 0 ]
[root@HomeWebServer nn]# node simpleRecognize.js
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 1 ]
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 1 ]
[root@HomeWebServer nn]# node simpleRecognize.js
[ 0, 0, 0, 0, 0, 0, 1, 0, 0, 0 ]
[ 0, 0, 0, 0, 0, 0, 1, 0, 0, 0 ]
```


Sometimes the network can give a wrong result (we took a small training sample and set a rather loose error threshold).

And how do you recognize a digit that you write yourself?


Of course, there is no trickery here, but I still wanted to put the result to the test.

Using an HTML5 canvas and the same brain.js with the saved model, I managed to implement recognition in the browser; I borrowed some of the drawing code and the interface design from the Internet. You can try it live. On a mobile device you can draw with your finger.

Related Links




UPD: alternative implementations of the live example (1, 2) in JavaScript, from the comments and personal correspondence.

Source: https://habr.com/ru/post/304414/
