
Artificial Intelligence and the Web: Part 0

Hi, Habr.



Having read what has been written on Habr about neural networks, I wanted to talk about artificial intelligence in a simpler and more interesting language. The idea is, first, to write a series of articles on the basics of neural networks, and second, I have several ideas for interesting projects that combine the interactivity inherent in everything web-based with neural network learning, but more on that later.


Introduction


In this article we will analyze fundamental concepts such as the neuron and associative memory, on which a large part of what is commonly called Artificial Intelligence is based.

I won't copy-paste lectures on neural networks or wiki articles; we'll get straight to the point. What exactly a neuron is, how it is described mathematically, and why it is needed, I think everyone with a due degree of interest will read up on for themselves. Below I propose the simplest JavaScript implementation of a neural network: a single neuron that implements a conjunction (the operation, however, can easily be changed by swapping the training matrix).
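For reference, the model implemented below is the classic threshold neuron with four inputs and a bias:

y = f(w1*x1 + w2*x2 + w3*x3 + w4*x4 + w0),   f(s) = 1 if s >= 0, else 0

where the wk are the input weights and w0 is the bias (defaultWeight in the code below).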

So, this class implements all the functions of our neural network:
// Global weights
var weights = new Array();
var defaultWeight = -1; // the bias weight

// Our small neuron class =)
function NeuronClass() {}
NeuronClass.prototype.tsum = 0;
NeuronClass.prototype.prilly = 0;
NeuronClass.prototype.view = '';
NeuronClass.prototype.vec = new Array();

// Sum all inputs, then apply the threshold activation function
NeuronClass.prototype.sum = function (x) {
    this.tsum = 0;
    for (var k = 0; k < 4; k++) {
        this.tsum += weights[k] * x[k];
    }
    this.tsum += defaultWeight;
    if (this.tsum < 0) {
        return 0;
    } else {
        return 1;
    }
};

// Teach function: the per-step learning factor
NeuronClass.prototype.teach = function (i, d, k) {
    this.prilly = 0.1 * (1 + Math.sin(0.01 * k * d * i));
    return this.prilly;
};

// Check how our neuron handles a given input vector
NeuronClass.prototype.check = function (vector) {
    this.vec = vector.split(',');
    this.view += this.sum(this.vec);
    $("#out_2").html('Result: ' + this.view);
    this.view = '';
};



The code does not claim to be beautiful; it was written on the knee, not as an exercise in "perfect code." Let's walk through the functionality.
The neuron has four inputs, and a simple adder is implemented inside by the sum method; the output of that adder is passed through a threshold activation function (all those scary words boil down to

if (this.tsum < 0) {
    return 0;
} else {
    return 1;
}

).
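To see the threshold in action, here is a small sketch with made-up numbers ([0.3, 0.3, 0.3, 0.3] is just a hypothetical trained state; real runs produce different weights):

weights = [0.3, 0.3, 0.3, 0.3]; // hypothetical trained weights
var n = new NeuronClass();
n.sum([1, 1, 1, 1]); // 0.3*4 - 1 =  0.2 >= 0, so the neuron returns 1
n.sum([1, 1, 1, 0]); // 0.3*3 - 1 = -0.1 <  0, so the neuron returns 0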



The teach method implements the network's learning: depending on the discrepancy between the value obtained from the neural network and the expected (known to be true) value, we change the weight of each input of our neuron.
The check method is needed after training, to verify how well the network has learned.
Now let's train the network on the conjunction training matrix.

var i, j, k, Yt, Yv, d;
var biasing = new Array();
var x = new Array();
var values = new Array();
var view = '';
var Neuron = new NeuronClass();

// Global hook used by the demo page's form
check = function (vector) {
    Neuron.check(vector);
};

// Start from random weights and random biasing factors
for (k = 0; k < 4; k++) {
    weights[k] = Math.random();
    biasing[k] = Math.random();
}
view += 'Start : ' + weights[0] + ' ' + weights[1] + ' ' + weights[2] + ' ' + weights[3] + '<br />';

i = 0;
while (i <= 200) {
    // Pick a random row of the training matrix (1..10)
    j = Math.floor(Math.random() * 10) + 1;
    // The training matrix for conjunction: the output is 1
    // only when all inputs are 1
    switch (j) {
        case 1:  x = [1, 1, 0, 1]; Yv = 0; break;
        case 2:  x = [1, 1, 1, 0]; Yv = 0; break;
        case 3:  x = [1, 1, 1, 1]; Yv = 1; break;
        case 4:  x = [1, 1, 0, 0]; Yv = 0; break;
        case 5:  x = [1, 0, 1, 1]; Yv = 0; break;
        case 6:  x = [1, 0, 1, 0]; Yv = 0; break;
        case 7:  x = [1, 0, 0, 1]; Yv = 0; break;
        case 8:  x = [1, 0, 0, 0]; Yv = 0; break;
        case 9:  x = [0, 1, 1, 1]; Yv = 0; break;
        case 10: x = [0, 0, 0, 0]; Yv = 0; break;
    }

    // Compare the network's answer with the expected one
    Yt = Neuron.sum(x);
    d = Yv - Yt;
    // Adjust every weight in proportion to the error
    for (k = 0; k < 4; k++)
        values[k] = Neuron.teach(i, d, biasing[k]);
    for (k = 0; k < 4; k++)
        weights[k] = weights[k] + values[k] * d * x[k];
    i++;
}
view += 'Stop : ' + weights[0] + ' ' + weights[1] + ' ' + weights[2] + ' ' + weights[3] + '<br />';
$("#out").html(view);




Everything inside the switch is the training matrix: we set the input values and the output they should produce. Naturally, training does not have to cover all possible input combinations, otherwise why use a neural network for the problem at all (although in this trivial case you certainly shouldn't use one, for performance reasons, but this is just a first example).
The lines

Yt = Neuron.sum(x);
d = Yv - Yt;
for (k = 0; k < 4; k++)
    values[k] = Neuron.teach(i, d, biasing[k]);
for (k = 0; k < 4; k++)
    weights[k] = weights[k] + values[k] * d * x[k];




are trivial, but nevertheless this is where we check how far the result has "drifted" from the expected one and, depending on that, call the learning function for each of the network's inputs.
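For intuition, here is roughly what a single step does, with made-up numbers (say iteration i = 10, a biasing value of 0.5, expected output Yv = 1, actual output Yt = 0):

var d = 1 - 0; // Yv - Yt: the neuron under-fired on this sample
var rate = 0.1 * (1 + Math.sin(0.01 * 0.5 * d * 10)); // ~0.105, what teach() returns
// every weight with an active input (x[k] = 1) grows by ~0.105,
// pushing the weighted sum toward the firing threshold next time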

Next we check how well the network has learned. This depends on many factors, from the initial values of the weights, which in this trivial case are simply chosen at random, to the number of training iterations.
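As a quick check (a sketch, assuming the training loop above has already run), we can feed all sixteen possible inputs to the neuron and see whether only the all-ones vector fires:

for (var a = 0; a < 16; a++) {
    var v = [(a >> 3) & 1, (a >> 2) & 1, (a >> 1) & 1, a & 1];
    console.log(v.join(',') + ' -> ' + Neuron.sum(v)); // ideally 1 only for 1,1,1,1
}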

Example: http://bnet.su/dvl/nn1/
Sources: http://bnet.su/dvl/nn1/nn1.zip

Hopfield network


Now let's deal with the Hopfield network. This network implements auto-associative memory and interests us for its ability to restore samples. Everything is simple: having built an n * n matrix from an n-dimensional vector, we can feed a distorted vector to the "input" of the network and eventually get the original back. This is a very useful property, and I don't think it's worth explaining where and how it can be applied. The wiki has plenty of theory on this, so we won't dwell on it here; besides, we are pursuing other goals.

From words to code. The MemClass class implements all the methods we need to work with the network.

function MemClass() {}

MemClass.prototype.global_matrix = new Array();

// Bipolar threshold: "1" for positive sums, "-1" otherwise
MemClass.prototype.sign = function (value) {
    return (parseFloat(value) > 0) ? '1' : '-1';
};

// Build and display the weight matrix for the vector to memorize
MemClass.prototype.searchW = function (vector) {
    var vec = vector.split(',');
    this.ViewerW(this.getW(this.getTmp(vec))[1]);
};

// Convert a binary {0,1} vector into a bipolar {-1,1} one
MemClass.prototype.getTmp = function (vec) {
    var tmp = new Array();
    for (var i = 0; i < vec.length; i++) {
        tmp[i] = parseFloat(2 * vec[i] - 1);
    }
    return tmp;
};

// The weight matrix: W[i][j] = x[i] * x[j], with the diagonal zeroed
// (i.e. W = x * x^T - I); also builds an HTML view of the matrix
MemClass.prototype.getW = function (tmp) {
    var view = '';
    var returned = new Array();
    var count = tmp.length;
    for (var i = 0; i < count; i++) {
        returned[i] = new Array();
        for (var j = 0; j < count; j++) {
            returned[i][j] = parseFloat(tmp[i] * tmp[j]);
            if (i == j)
                returned[i][j]--; // subtract the identity matrix
            if (returned[i][j] >= 0)
                view += ' ';
            view += returned[i][j];
        }
        view += '<br />';
    }
    this.global_matrix = returned;
    return Array(returned, view);
};

// Dot product of the input vector with row j of the weight matrix
MemClass.prototype.check = function (vector, j) {
    var sum = 0;
    for (var i = 0; i < vector.length; i++) {
        sum = sum + parseFloat(vector[i]) * parseFloat(this.global_matrix[j][i]);
    }
    return sum;
};

// Restore a (possibly distorted) vector: y' = sign(W * y)
MemClass.prototype.checkMatrix = function (vector) {
    var view = '';
    var vec = new Array();
    vector = vector.split(',');
    for (var i = 0; i < vector.length; i++) {
        vec[i] = this.sign(this.check(vector, i));
        view += vec[i];
    }
    this.ViewerCheck(view);
};

MemClass.prototype.ViewerCheck = function (vector) {
    $("#check_vector").html(vector);
};

MemClass.prototype.ViewerW = function (view) {
    $("#matrix").html(view);
    $("#matrix").show("drop", { direction: "right" }, 500);
    $("#form_second").css({ display: 'block' });
};




Let's figure out what's what. When you enter a vector to memorize, the getW method is ultimately called, which builds the weight matrix

W = x * x^T - I

where x is the bipolar version of the memorized vector and I is the identity matrix of size n * n. The math does not end there: when a vector is restored, the following operation is applied (implemented by the checkMatrix method):

y' = sign(W * y)

Well, that's almost everything: now we can memorize a binary vector and, by distorting it in every possible way, find out when the network finally says "Foo!".
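To make this concrete, here is a tiny worked sketch using the class above (the vectors are hypothetical; the original demo feeds them in from form fields):

var Mem = new MemClass();
// memorize the binary sample 1,0,1,0 (getTmp turns it into bipolar 1,-1,1,-1)
Mem.searchW('1,0,1,0');
// recall from a distorted bipolar copy (last component flipped):
// checkMatrix computes sign(W * y) and displays 1,-1,1,-1, the stored sample
Mem.checkMatrix('1,-1,1,1');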

Well, a little jQuery witchcraft and we're done:

$(function () {
    $('#form_first label').tooltip({
        track: true,
        delay: 100,
        showBody: '::',
        opacity: 0.85,
        bodyHandler: function () {
            return ' , "1" "-1" ( ) ( =))';
        }
    });
    $('#form_second label').tooltip({
        track: true,
        delay: 100,
        showBody: '::',
        opacity: 0.85,
        bodyHandler: function () {
            return ' , , ( )';
        }
    });
    $('#matrix').tooltip({
        track: true,
        delay: 100,
        showBody: '::',
        opacity: 0.85,
        bodyHandler: function () {
            return ' ';
        }
    });
    $('#check_vector').tooltip({
        track: true,
        delay: 100,
        showBody: '::',
        opacity: 0.85,
        bodyHandler: function () {
            return ' ';
        }
    });
});




For greater clarity we attach hints to the form elements, using the tooltip plugin for jQuery.

Example: http://bnet.su/dvl/nn2/
Sources: http://bnet.su/dvl/nn2/nn2.zip

Conclusion


In this article we covered the basics of the basics of neural networks. I hope it was not too unfounded and hand-wavy for the mathematicians, and not too dry and boring for the programmers. In the next article we will talk about so-called "genetic" algorithms, and about why neural networks belong on the web at all.
A mirror of this article is on my blog: http://bnet.su/blog/?p=30

Source: https://habr.com/ru/post/50368/

