
The history of Artificial Intelligence, part 2. Neural network AI - inevitable or impossible?

When I first thought about writing this article, all I knew about neural networks was that they seem to copy the thinking process of our brain. I did not realize then how wrong I was.
At a time when the cyberneticists were still playing with their toys, other, more serious scientists were working on a more serious problem. Starting from neurophysiological data on the structure of neurons, the cells of our brain, they tried to recreate that structure. This was a few years before the very workshop where AI was first spoken about.
These serious scientists were one hundred percent sure that the only thing capable of thinking is our brain. Therefore, anything meant to have that ability should reproduce the brain's structure. A bold statement, especially considering that they had only a rather vague idea of the processes of thinking. Or rather, not even an idea, just a hypothesis: that human thinking works by means of its own neural networks.
In other words, knowing the structure of a neuron, they created a simplified copy of it, and knowing how neurons connect into a network, they tried to build a logic on the basis of that knowledge, and then a computer. At this point only mathematicians are at work. Nobody claims that a person uses his neurons the same way a neurocomputer does. Engineers simply replaced classical semiconductors with artificial neurons and tried to build a new logic. Nobody thinks about the psychological side of the process.
The principle of operation of neural networks differs somewhat from that of semiconductor chips: the former do not use binary codes. But the secret weapon of neural networks is that they are rumored to be able to learn. Let's try to understand this.
So, we have a network (single-layer or multi-layer) consisting of simplified models of neurons. There is a group of input neurons and, accordingly, a group of output ones. When information is fed to the inputs, the signals run through the network from neuron to neuron, being amplified and weakened along the way. They can go directly from input to output, but they can also make loops, turning back a few neurons. A signal may even wander randomly across the entire network; the point is that a properly processed signal forms at the outputs.
If the coefficients of a quadratic equation are fed to the inputs, we should get its solutions at the outputs. If the input is an image of handwriting, the output should form its recognized version.
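The signal flow described above can be sketched in a few lines of Python. This is purely my own illustration, with hand-picked weights and a sigmoid activation chosen for simplicity, not any particular historical network:

```python
from math import exp

def sigmoid(s):
    """Squash the weighted sum into the range (0, 1)."""
    return 1.0 / (1.0 + exp(-s))

def layer(inputs, weight_rows, f):
    """Each neuron in the layer sums its weighted inputs and applies f."""
    return [f(sum(x * w for x, w in zip(inputs, row)))
            for row in weight_rows]

# Hand-picked weights: 2 input neurons -> 3 hidden neurons -> 1 output neuron.
w_hidden = [[0.5, -0.2], [0.8, 0.8], [-0.4, 0.3]]
w_out = [[1.0, -1.0, 0.5]]

# Signals travel from the inputs through the hidden layer to the output,
# amplified or weakened by the connection weights along the way.
hidden = layer([1.0, 0.0], w_hidden, sigmoid)
output = layer(hidden, w_out, sigmoid)
print(output)  # a single number between 0 and 1
```

Whether that output means anything depends entirely on the weights, which is exactly why the network has to be trained, as described next.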
At the beginning, while the neural network is not yet configured, it is of no use. For it to be good for anything besides its mysterious name, it must be trained. This process begins with the problem being solved by some external means. This is necessary in order to obtain a set of questions and correct answers. These questions are then fed to the input, and the answers to the output.
Consider how this process works for a single neuron. Input signals arrive at its inputs, and the output receives a signal that is known in advance to be correct. The neuron, in turn, generates its own output signal by summing the weighted inputs and applying a special function (the activation function) to the sum.
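Such a neuron fits into a few lines of Python. A minimal sketch under my own assumptions: the weights and the threshold of the step activation are picked by hand, so that this particular neuron happens to behave like a logical AND:

```python
def step(s, threshold=1.0):
    """Activation function: fire (1) if the weighted sum exceeds the threshold."""
    return 1 if s > threshold else 0

def neuron_output(inputs, weights):
    """Sum the weighted inputs, then apply the activation function."""
    s = sum(x * w for x, w in zip(inputs, weights))
    return step(s)

# Two inputs with hand-picked weights of 0.6 each:
print(neuron_output([1, 1], [0.6, 0.6]))  # sum 1.2 > 1.0, the neuron fires: 1
print(neuron_output([1, 0], [0.6, 0.6]))  # sum 0.6 <= 1.0, it stays silent: 0
```

Training, described below, is the process of finding such weights automatically instead of picking them by hand.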
As a result, at the first stage the neuron's own output signal differs from the external one. If a discrepancy is observed, special algorithms adjust the weights of the input signals until the two output signals become close.
But here you need to be careful: if they are made too close, the neural network will "overtrain". Then, if its task is, say, text recognition, it will be able to recognize only the handwriting of one particular person and nobody else's. And if the two signals are left too far apart, the network will "undertrain", that is, it will not be able to recognize anyone's handwriting at all.
To prevent this, the developer doing the training sets a permissible tuning error, a tolerance. If the difference between the output signals does not exceed this tolerance, the learning process can be considered complete.
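Putting the pieces together, here is a sketch of training a single linear neuron by the classic delta rule. The tolerance `eps`, the learning rate, and the toy data set are my own illustrative choices; the point is only to show how the developer-set error decides when learning stops:

```python
def train(samples, eps=0.01, rate=0.1, max_epochs=10_000):
    """samples: list of (inputs, correct_output) pairs."""
    n = len(samples[0][0])
    weights = [0.0] * n
    for _ in range(max_epochs):
        worst = 0.0
        for inputs, target in samples:
            out = sum(x * w for x, w in zip(inputs, weights))
            err = target - out
            worst = max(worst, abs(err))
            # Shift each weight a little toward reducing the discrepancy.
            weights = [w + rate * err * x for w, x in zip(weights, inputs)]
        if worst <= eps:  # every answer is within the tolerance: done
            break
    return weights

# Teach the neuron, from examples alone, to compute out = 2*x1 + 3*x2.
data = [([1, 0], 2), ([0, 1], 3), ([1, 1], 5)]
w = train(data)
print(w)  # close to [2.0, 3.0]
```

Making `eps` very small corresponds to the "overtraining" danger above, and making it too large to "undertraining": the neuron would stop while its answers are still far from correct.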
Something like that... In reality everything is far more complicated, since there are many neurons in the network. Networks differ, and no one yet knows what a network really should look like; moreover, no one knows how many neurons it should contain.
However complicated and unimaginable it all may seem, the first neural network computers were built fifty years ago. And although they were not very powerful, they worked. The main differences between such computers and ordinary ones are self-learning (although I would prefer the term "self-tuning") and parallel computing.
This is why such high hopes are placed on them: our ordinary processor-based computers do all their calculations sequentially. Parallel computing gives a new impetus to the development of search programs, cryptanalysis, and other areas that involve sifting through large amounts of information.
Today neural networks are used to recognize patterns, text, and speech. There are even entire software systems built on them that replace traders on the exchange. Expansion cards with neural networks are available that can be inserted into a regular computer and used with the appropriate software. They handle their tasks excellently, although they are still being improved. But is it possible to create artificial intelligence on their basis?
Yes, a neural network is capable of self-learning, but for this you need to know all the answers in advance. Learning by trial and error is not an option here, so calling it self-learning is a stretch. A network can be tuned to solve a specific task; it will then cope with that task, but with nothing else. To retrain it for new goals, you need a person, and not just any person but an expert in the new area. By itself, the neural network is not capable of learning anything.
The key idea of neurocybernetics was to recreate the brain together with its thinking processes. But the only thing achieved was a partial replica of the structure of just one type of its cells, the neurons. On top of that, they bolted on their own algorithms, which have nothing to do with our thinking.
As a result, we get the same cybernetic "black box": an input, an output, and devil knows what inside. The process of our thinking has nothing in common with the work of a neural network, since neural networks do not use analysis, synthesis, comparison, or deduction. Nor do they take into account such a factor as the human psyche. When recreating neurons, the properties associated with all of this are omitted, so the process is reproduced only at a superficial level.
The creators of neural networks who, beyond logic, are also trying to build a kind of "programming language" based on human psychology face a serious problem. They are looking for a general theory of psychology that would be easy to formalize. But the fact is that there are dozens of approaches here, and, more interestingly, not one of them appeals to physiology, that is, to the idea that our neural networks are the engine of our psyche. What if there is no such connection at all?
Theoretical psychology is an independent system with its own elements. It is in no way tied to physiology, so this "programming language", this formalization of our psychology, could be implemented on neural networks, on some other platform, or described purely mathematically. Neural networks, then, are not so indispensable: you can do without them.
It is time to return to the question I asked at the end of the first part: is the existing definition of Artificial Intelligence really so good? It seems that all the component parts have already been created, but this Frankenstein is still missing something. There must be something that breathes life into it.
The main thing here is to understand why we need Artificial Intelligence at all. To solve logical problems? A regular computer can already do that. To recognize images or speech? Such technologies already exist. Maybe there is some other, quite complicated task? I think some technology or development will be found for it, too. Any of our problems can be solved separately, without involving artificial intelligence. Then why do we need it?
I suspect it is not for any particular task. Let's not beat around the bush and fool ourselves. Let's face it: we want Artificial Intelligence to be as close as possible to human intelligence.
So that it would be just as illogical, possess intuition, the ability to generate ideas, make decisions, feel, empathize; so that communicating with it would feel like communicating with a person. So that it would have its own view of the world, so that it could argue and agree or disagree with an opponent. So that it would be attentive, able to make friends, keep secrets, lie, respect and dislike; so that it would have a sense of humor. So that it could love.
It seems to me that this definition is closer to the point.

UPD: Judging by the comments, it became clear that the title did not reflect the essence, so it had to be changed.

Table of contents:
History of Artificial Intelligence, part 1. Painting without an artist.

The history of Artificial Intelligence, part 2. Neural network AI - inevitable or impossible?

Making Artificial Intelligence

Source: https://habr.com/ru/post/21983/

