When I talk about our work on artificial intelligence, I am sometimes asked what I think about the moral problems of creating artificial intelligence and of transferring the human mind to an electronic medium.
How would an artificial intelligence become aware of itself? Would a human mind transferred to an electronic carrier feel as though its body had been amputated? Is it humane to perform inhumane experiments on an artificial intelligence or on electronic copies of real people?
My answer: I believe that talk of an artificial mind suffering is complete nonsense.

Artificial intelligence can imitate thoughts and feelings more or less accurately, but it does not actually have those thoughts or feelings itself.
Any program, artificial intelligence included, does only one thing: it reads bits from memory cells, performs a logical operation on them, and writes the result back to a memory cell. That is all. It is the human observer who interprets these reads and writes as the program's actions: computing the value of a function, carrying out a banking transaction, or an AI expressing its feelings.
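To make this concrete, here is a minimal Python sketch of such a machine. The instruction set, the program, and the memory contents are all invented for illustration; the only point is that every step is "read cells, apply a logical operation, write a cell", and any "meaning" of the result lives entirely in our interpretation.

```python
# A minimal sketch (not any real AI): a toy machine whose only abilities are
# "read a cell", "apply a logical operation", "write a cell".
# The instruction set and the program below are made up for illustration.

memory = [1, 0, 1, 1, 0, 0, 0, 0]   # the "memory cells"

# Each instruction: (operation, source cell A, source cell B, destination cell)
program = [
    ("AND", 0, 1, 4),
    ("OR",  2, 3, 5),
    ("XOR", 4, 5, 6),
    ("NOT", 6, None, 7),
]

def step(op, a, b, dst):
    """Read bits, perform one logical operation, write the result back."""
    x = memory[a]
    y = memory[b] if b is not None else None
    if op == "AND":
        result = x & y
    elif op == "OR":
        result = x | y
    elif op == "XOR":
        result = x ^ y
    elif op == "NOT":
        result = 1 - x
    memory[dst] = result

for instruction in program:
    step(*instruction)

# The machine itself did nothing but shuffle bits. Whether cell 7 now "means"
# a laugh, a tear, or the balance of a bank account is entirely our
# interpretation, not something the program experiences.
print(memory)
```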
Perhaps some people are misled by the fact that an AI appears to think, feel, and react to external stimuli in real time and quite plausibly: tell it a joke and it laughs like a person, "hit it on the head" and it cries, and so on.
However, an AI does not have to run on a computer in real time at all. Like any other program, it can perfectly well be executed by hand in a squared notebook.
I repeat: an AI, like any other program, consists of instructions: read data from a memory cell, perform a logical operation on it, write data to a memory cell. Now imagine that instead of silicon RAM we have a squared notebook, and instead of a computer, a lab technician[1].

Following the instructions (that is, the text of the AI program), the lab technician mechanically reads the required squares in the notebook, performs some manipulations, shades in some squares, reads again, shades again... and the artificial intelligence begins to "live."
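For illustration, here is a sketch that reuses the invented toy program from above but, instead of executing it, prints each instruction as a step a person could carry out with a pencil in a squared notebook. The wording of the steps is, of course, my own.

```python
# A sketch of the same idea executed "on paper": instead of running the toy
# program, we print its steps as instructions a lab technician could follow
# in a squared notebook. Nothing here is a real AI; the program is invented.

program = [
    ("AND", 0, 1, 4),
    ("OR",  2, 3, 5),
    ("XOR", 4, 5, 6),
    ("NOT", 6, None, 7),
]

for n, (op, a, b, dst) in enumerate(program, start=1):
    if op == "NOT":
        print(f"Step {n}: read square {a}, invert the bit, "
              f"shade square {dst} accordingly.")
    else:
        print(f"Step {n}: read squares {a} and {b}, compute {op}, "
              f"shade square {dst} accordingly.")

# A person following this printout with a pencil performs exactly the same
# computation as the computer would, only much more slowly, and without
# anything that could plausibly be called feeling.
```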
Then we hand the lab technician a sheet of squared paper that encodes terrible torments for our poor artificial intelligence: it is fried in a pan, hung up by a rib, forced to fill out a tax return or to do page layout for Internet Explorer 5... you may continue according to the measure of your own depravity :-).
The lab technician reads and shades squares, and this now means terrible agony for the AI, convulsions and screams.
Now the question is: who is suffering?
The lab technician? Come on, he has no idea what these squares mean. He suffers only because he is forced to do idiotic work for some 3,000 rubles a month.
The notebook? Well, as they say, paper will endure anything...
Some combined system of "lab technician + instructions + notebook"? That is too far-fetched...
So we have established that the AI does not think and does not feel. Happy now? What's wrong this time?

But the human brain can also be represented as a squared notebook: a neuron receives signals from neighboring neurons, performs certain operations on them, and passes the result on to other neurons.
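As an aside, the same "receive a signal, perform an operation, pass it on" description fits a crude textbook-style artificial neuron. The sketch below is not a claim about biology; the weights, threshold, and inputs are invented for illustration.

```python
# A minimal sketch of the "read, operate, write" view applied to a neuron.
# A crude textbook-style artificial neuron; all numbers are invented.

inputs  = [1, 0, 1]            # signals arriving from neighboring neurons
weights = [0.5, -1.0, 0.8]     # how strongly each incoming connection counts
threshold = 1.0                # the neuron "fires" if the weighted sum reaches this

weighted_sum = sum(x * w for x, w in zip(inputs, weights))
output = 1 if weighted_sum >= threshold else 0   # signal passed on to other neurons

print(weighted_sum, output)    # 1.3 -> the neuron fires
```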
Yet I refuse to agree that I have no feelings!
Believers in religion, the soul, and so on would say that a person has a soul that does the feeling, while the brain merely connects the soul to the body.
Non-believers would say that we simply do not yet know enough about how the brain works.
It is not for me to judge which of them is right. One thing, however, can be said for certain: an AI created as an ordinary computer program cannot think or feel.
Notes
- ↑ This is somewhat similar to the “Chinese room” thought experiment proposed by John Searle.
UPD. Thanks to the Habr community for the interesting discussion of the problem. A karma plus to everyone who shared their view on the issue, including those with whom I disagree :-)