
The feeling of pain as the software foundation of strong artificial intelligence

So what, after all, is the main problem in creating a strong artificial intelligence? "The absence of hardware with sufficient power, of course," someone will say. And they would, of course, be right. After all, if we tried today to build a computer with trillions of nerve cells, somewhat resembling a human brain, each of which could be compared to a separate computer with its own functions and properties, we would end up with a house-sized "robo-brain". But if we imagine that progress has reached the point where building such a machine is only a question of money and free time, then we can also think about the algorithm such a machine would run. Which is what I did.

As a result of this thinking, I wandered into a dead end that was quite unexpected for me. But let's start from the very beginning, so that everyone can follow.

The main difference between a weak AI and a strong one lies, of course, in the latter's possession of consciousness, i.e. the ability to be aware of itself at a given moment in the surrounding space. After all, however complex and variable a weak AI may be, it will never take the initiative unless that initiative is described in its program code, simply because it will have no need to, not being an individual with its own ego. So what is consciousness, and how can it be implemented in software? If we discard all the metaphysics and mysticism that have surrounded this term for centuries and try to picture it as a program, we get something like a chat in which a bot communicates with itself. I mean that very internal dialogue with whose help we ask ourselves questions and answer them ourselves, drawing conclusions on the basis of which we ask ourselves new questions, and so on in a circle. After all, we think constantly, and stopping this process is the basis of Buddhism, shamanism, and many other spiritual teachings and practices described, for example, by Castaneda.
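The "chat with itself" picture above can be sketched as a trivial loop. Everything in this sketch is hypothetical: `ask` and `answer` are stand-ins for whatever question-generating and answering machinery a real system would need.

```python
# A minimal sketch of the "internal dialogue" loop described above,
# assuming consciousness is modeled as a bot chatting with itself.
# ask() and answer() are hypothetical placeholders, not real components.

def ask(conclusion):
    # Turn the previous conclusion into a new question.
    return f"What follows from '{conclusion}'?"

def answer(question):
    # Produce a conclusion from a question.
    return f"a conclusion drawn from '{question}'"

def internal_dialogue(seed, steps=3):
    # A human "thinks constantly"; here we stop after a few turns.
    log = []
    conclusion = seed
    for _ in range(steps):
        question = ask(conclusion)      # the system asks itself a question...
        conclusion = answer(question)   # ...and answers it itself
        log.append((question, conclusion))
    return log

for q, a in internal_dialogue("I exist"):
    print(q, "->", a)
```

Each answer feeds the next question, which is the "circle" the paragraph describes.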

So, let's say we wrote such a bot, which has two functions: ask itself a question and answer it itself. We also loaded an enormous number of questions and answers into its database, and wired in a self-learning function (assuming the hardware lets us use neural connections that link questions with answers and give the dialogue its logic during training). But this alone will not make our AI "alive", for two reasons. First, we need to let our computer receive information from outside, more specifically from the outside world; but that is largely a matter of mechanics (imitating vision, hearing, and so on). Second, it still lacks any motivation to develop, and indeed any instincts at all. It is the second reason that interests us.
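The two-function bot just described can be sketched in a few lines. The tiny database and the lookup rule are invented purely for illustration; the article's hypothetical system would instead use learned neural links between questions and answers.

```python
# Sketch of the two-function bot: it can ask itself a question and answer
# it from a pre-loaded question/answer database. The database contents and
# the dictionary lookup are illustrative assumptions only.

import random

QA_DATABASE = {
    "What am I?": "A program.",
    "What is a program?": "A set of instructions.",
    "What are instructions for?": "To produce behavior.",
}

def ask_self(db):
    # Function 1: pick a question the bot "wonders" about.
    return random.choice(list(db))

def answer_self(db, question):
    # Function 2: look up an answer; admit ignorance for unknown questions.
    return db.get(question, "I don't know yet.")

q = ask_self(QA_DATABASE)
print(q, "->", answer_self(QA_DATABASE, q))
```

As the paragraph argues, nothing in this loop gives the bot a reason to act; that missing motivation is the subject of the rest of the article.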
And this is where the fun begins. Indeed, in addition to internal dialogue, a person has a set of different feelings, such as love, fear, hate, loyalty, anger, and so on, which are what actually make us human beings. And how can you make a machine love or hate? The question is difficult, of course, and science fiction writers have posed it many times. But if you try to work it out, it is not all that complicated. If you think about it, all these feelings share one basis. Let us analyze the algorithm of human feelings using "love" as an example. For quite a long time scientists have been telling us that love is chemistry, i.e. a set of chemical processes occurring in the human body, but that is still just mechanics.

But the programmatic basis of love, in my opinion, is, oddly enough, the fear of loss. After all, if we remove the "fear of loss" function from the "love" program, we find that nothing is left of it and the program no longer works.
Next we follow the chain. This is where human egoism shows itself, or rather the principle described by Isaac Asimov, "do no harm to yourself", i.e. the instinct of self-preservation. Feeling that same fear of loss, we are not so much afraid of losing a person as we are afraid of experiencing pain (of harming ourselves). The same holds for the other feelings. The feeling of fear is a foundation that follows automatically from the theoretical possibility of experiencing pain. And it is the presence of the feeling of pain that is the most basic condition for creating a strong artificial intelligence.
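The chain argued above can be written down as a toy reduction: "love" rests on "fear of loss", which rests on "fear of pain", which rests on the bare capacity for pain. The feelings and their dependencies are the article's claim; the table-and-loop representation is merely one way to express it.

```python
# Toy model of the article's reduction chain: each feeling is defined by
# the more basic feeling it depends on, bottoming out at pain itself.
# The entries in this table are the article's argument, not established fact.

FEELING_BASIS = {
    "love": "fear of loss",
    "fear of loss": "fear of pain",
    "fear of pain": "pain",  # pain is the base capability, built on nothing
}

def reduce_to_basis(feeling):
    # Follow the chain of dependencies down to its foundation.
    chain = [feeling]
    while chain[-1] in FEELING_BASIS:
        chain.append(FEELING_BASIS[chain[-1]])
    return chain

print(reduce_to_basis("love"))
# ['love', 'fear of loss', 'fear of pain', 'pain']
```

Removing any link ("fear of loss", say) breaks every feeling above it, which is exactly the thought experiment the paragraph performs on "love".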

Once again: if we manage somehow to imitate the feeling of pain in our machine, it will automatically acquire the instinct of self-preservation and will try by every means not to experience that pain, i.e. it will become motivated. Fear will immediately appear, including the fear of death as a critical form of self-preservation, along with the most important thing: the desire to live and develop, as the resulting consequence and the opposite of the fear of death. I hope my logic is clear to everyone. But this is not all. Rather, it is only the beginning, and everything I wrote above was merely a preface.
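The claim that pain alone produces motivated behavior can be illustrated with a trivial agent. The one-dimensional "world", the three actions, and the pain function are all invented for the sketch; the only point is that minimizing pain is the agent's sole drive, yet goal-directed behavior emerges.

```python
# Sketch of "pain produces motivation": an agent whose only rule is to
# avoid pain ends up behaving as if it wanted to reach safety.
# The state space and pain signal are assumptions made up for this demo.

def pain(state):
    # Hypothetical pain signal: the further from the safe state 0, the worse.
    return abs(state)

def choose_action(state, actions=(-1, 0, 1)):
    # The agent's single drive: pick the action minimizing expected pain.
    return min(actions, key=lambda a: pain(state + a))

state = 5                 # start in a "painful" state
trajectory = [state]
for _ in range(6):
    state += choose_action(state)
    trajectory.append(state)

print(trajectory)  # [5, 4, 3, 2, 1, 0, 0]
```

The agent was never told to "seek safety"; self-preserving movement toward the painless state falls out of pain avoidance, which is the article's point in miniature.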

So we have come to the conclusion that the main function of a strong AI should be precisely the pain function. Not fear and not the instinct of self-preservation, but the ability to experience pain, since fear and self-preservation are consequences that follow from it.

Even if I am mistaken somewhere and the feeling of pain is not, after all, the basis of human consciousness, I think that sooner or later anyone creating a strong AI will face this question, so one way or another I consider the topic relevant. Moreover, when I started looking for information on a software embodiment of the self-preservation instinct, I found practically nothing of substance.

And this is where I hit my dead end. The question sounds rather funny: how do you hurt a computer? If you kick the system unit, it is unlikely to start yelling at you. There is no getting around the mechanics of the brain here, and we will have to delve into neurobiology. We will also have to touch on the question of the pain threshold, which, as we know, differs from person to person, and which especially gifted people can even switch off entirely at will, taking a walk on hot coals, for example, or sleeping on a bed of nails. From a software point of view this is a very interesting topic. But in order to separate one issue from another logically and not get confused in the end, I would like to explore this question, along with many others, in future articles, and in time get to the hardware design of a possible future strong artificial intelligence.
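The pain-threshold idea mentioned above at least suggests one software shape: a per-individual cutoff below which a raw pain stimulus is simply not felt, and which can be raised without limit ("switched off", as the fire-walkers supposedly do). This is entirely a toy model of the article's remark, with all numbers invented.

```python
# Toy model of an individual pain threshold: stimuli below the threshold
# are not felt at all, and an infinite threshold "switches pain off".
# The class, values, and formula are assumptions for illustration only.

import math

class PainSense:
    def __init__(self, threshold=1.0):
        self.threshold = threshold  # individual pain threshold

    def felt_pain(self, stimulus):
        # Only the part of the stimulus above the threshold is felt.
        return max(0.0, stimulus - self.threshold)

    def switch_off(self):
        # The "especially gifted" case: nothing can exceed the threshold.
        self.threshold = math.inf

average_person = PainSense(threshold=1.0)
fire_walker = PainSense(threshold=1.0)
fire_walker.switch_off()

print(average_person.felt_pain(5.0))  # 4.0
print(fire_walker.felt_pain(5.0))     # 0.0
```

Whether anything like this cutoff corresponds to how biological pain thresholds actually work is exactly the neurobiology question the paragraph defers to future articles.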

P.S. Maybe in this article I have reinvented the wheel, but, having read many books on strong AI, I understood nothing until I started thinking about it myself. They all just pad the pages and beat around the bush, and nobody asks the most important question: "What is the fundamental difference between a human and a machine?" Perhaps such arguments are already laid out somewhere, but it is impossible to read everything; and, as I said, my searches on the net were not particularly successful, so this article is only a preface needed to follow the logic of the arguments to come.

Thanks for your attention!

Source: https://habr.com/ru/post/236807/

