
Will robots ever truly be self-aware? Scientists are moving in that direction.



At the heart of what it means to be human is the ability to be aware of oneself. Without it, we could not navigate, interact, empathize, or survive in an ever-changing, complex world shared with other people. We need self-awareness both when we act and when we anticipate the consequences of potential actions, whether our own or other people's.

Given our desire to bring robots into our social world, it is not surprising that creating self-awareness in artificial intelligence is one of the main goals of researchers in the field. If these machines are to care for us and keep us company, they will inevitably need the ability to put themselves in our place. And although scientists are still far from creating robots with a human-like sense of self, they are gradually getting closer.

A new study, published in Science Robotics, describes a robotic arm that learned its own physical form; that is, it acquired the simplest version of self-awareness. Modest as that sounds, it marks an important stage in the development of robotics.
There is no settled scientific account of what makes up the human sense of self. Neuroscience research shows that the motor areas of the neocortex and the parietal regions of the brain become active in many situations that have nothing to do with movement. For example, hearing words such as "grasp" or "hit" activates a person's motor areas, as does watching another person perform those actions.

On this basis, a hypothesis has emerged that we perceive other people's actions as if we were performing them ourselves, a phenomenon scientists call "embodied simulation". In other words, we use our own capacity for bodily action to give meaning to the actions and goals of others. The simulation is driven by a mental model of the body, or of the self, and it is exactly this kind of model that researchers are trying to reproduce in machines.
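To make this concrete, here is a minimal Python sketch of the embodied-simulation idea. Everything in it is hypothetical: the straight-line forward_model stands in for a learned model of one's own body, and the observer interprets another agent's reach by replaying it through that model and asking which goal explains it best.

```python
import numpy as np

def forward_model(start, goal, steps=10):
    """How *my* body would reach from start to goal: a straight-line
    motion, standing in for a learned model of one's own movement."""
    return np.linspace(start, goal, steps)

def infer_goal(observed_trajectory, candidate_goals):
    """Pick the goal whose simulated reach best matches the observation."""
    start = observed_trajectory[0]
    errors = [np.mean(np.linalg.norm(
                  forward_model(start, goal, len(observed_trajectory))
                  - observed_trajectory, axis=1))
              for goal in candidate_goals]
    return candidate_goals[int(np.argmin(errors))]

# Watch someone reach from (0, 0) toward (1, 1): which object did they want?
rng = np.random.default_rng(0)
observed = forward_model(np.array([0.0, 0.0]), np.array([1.0, 1.0])) \
           + rng.normal(0.0, 0.02, (10, 2))
goals = [np.array([1.0, 1.0]), np.array([-1.0, 0.5])]  # e.g. a cup vs. a phone
print("inferred target:", infer_goal(observed, goals))
```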

Physical "I"


The team behind the study used a deep-learning neural network to build a self-model inside a robotic arm from data generated by its own random movements. The AI was given no information about the arm's geometry or physical properties; it learned gradually, by moving and bumping into objects, much as an infant comes to know its own body by watching its hands.
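As a rough illustration of this kind of self-modeling (not the authors' actual architecture, which the paper describes in detail), the PyTorch sketch below has a simulated two-joint arm "babble" random joint angles while a small network, with no built-in knowledge of the arm's geometry, learns to predict where the hand ends up. The link lengths, network size, and training budget are all assumptions made for this sketch.

```python
import math
import torch
import torch.nn as nn

L1, L2 = 1.0, 0.8  # assumed link lengths of a planar two-joint arm

def physical_arm(q):
    """Ground-truth kinematics, standing in for the real robot's body."""
    x = L1 * torch.cos(q[:, 0]) + L2 * torch.cos(q[:, 0] + q[:, 1])
    y = L1 * torch.sin(q[:, 0]) + L2 * torch.sin(q[:, 0] + q[:, 1])
    return torch.stack([x, y], dim=1)

# 1. Motor babbling: random joint angles and the hand positions they produce.
q = (torch.rand(5000, 2) * 2 - 1) * math.pi
xy = physical_arm(q)

# 2. The self-model: a network that starts out knowing nothing about the arm.
self_model = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                           nn.Linear(64, 64), nn.Tanh(),
                           nn.Linear(64, 2))
optimizer = torch.optim.Adam(self_model.parameters(), lr=1e-3)

# 3. Fit the self-model to the babbling data.
for step in range(2000):
    loss = ((self_model(q) - xy) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final self-model prediction error: {loss.item():.4f}")
```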

The robot was then able to use this self-model, which captured its shape, size, and movement, to make predictions about actions, such as picking something up with a tool. When the scientists physically altered the arm, the mismatch between the robot's predictions and reality caused the learning loop to restart, letting the robot adapt its model to the new body shape.
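That adaptation step can be sketched as a simple monitoring loop. In the hedged Python below, self_model, observe, and retrain are hypothetical stand-ins for the learned model, the robot's sensors, and a fresh round of babbling-based learning; the error threshold and window size are likewise assumptions.

```python
import numpy as np

ERROR_THRESHOLD = 0.05  # assumed tolerance before the model is distrusted
WINDOW = 10             # how many recent movements to average over

def control_loop(self_model, observe, retrain, commands):
    """Run joint commands while watching for self-model drift."""
    recent_errors = []
    for q in commands:
        predicted = self_model(q)  # where the model thinks the hand will be
        actual = observe(q)        # where the hand actually ended up
        recent_errors.append(np.linalg.norm(predicted - actual))
        # A sustained mismatch means the body no longer matches the model
        # (say, a deformed or replaced part): restart self-model learning.
        if len(recent_errors) >= WINDOW and \
                np.mean(recent_errors[-WINDOW:]) > ERROR_THRESHOLD:
            self_model = retrain()
            recent_errors.clear()
    return self_model
```

The point of the sketch is that the robot never needs to be told what changed: a sustained rise in prediction error is itself the signal that its body model has gone stale.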



And although the study used a single arm, similar self-learning models, inspired by research in developmental psychology, are also being developed for humanoid robots.

Full "I"


Even so, this robotic sense of self cannot be put on a par with the human one. Our "self", like an onion, has many mysterious layers. These include identifying oneself with a body, locating oneself within that body's physical boundaries, and experiencing the world from its visuospatial perspective. But they also include processes that go beyond this: integrating information from the senses, maintaining continuity over time through memories, initiating and being aware of one's own actions, and the privacy of one's thoughts.

And although the road to a robotic self-awareness that spans all of these layers has only just begun, building blocks such as the body model constructed in the new study are already appearing. Machines can also be made to imitate others, to predict other agents' intentions, or to change their behavior as circumstances change. Such developments, along with the growth of episodic memory, are also important steps toward socially capable robots.

Intriguingly, this research may also help us learn more about human self-awareness. We know that robots can adapt their physical self-model when the configuration of their bodies is changed. This can be framed another way, as a situation analogous to tool use in animals, in which external objects become coupled to the body.

Brain imaging shows that the neurons in monkeys that fire during grasping also fire when the monkeys pick up objects with pliers, as if the pliers had become their fingers. The tool becomes part of the body, and the sense of self changes. It is similar to the way we identify with an on-screen avatar while playing a video game.
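Once a body model exists, incorporating a tool is only a small extension of it, which is one reason robots make a good testbed here. In the toy sketch below, a planar two-joint arm model gains a fixed-length tool, so the robot can predict the tool tip instead of the hand; the geometry and lengths are invented for illustration, not taken from the study.

```python
import numpy as np

L1, L2, TOOL = 1.0, 0.8, 0.3  # assumed link lengths and tool length

def hand_position(q1, q2):
    """Forward model of the bare arm: where the hand is."""
    x = L1 * np.cos(q1) + L2 * np.cos(q1 + q2)
    y = L1 * np.sin(q1) + L2 * np.sin(q1 + q2)
    return np.array([x, y])

def tool_tip(q1, q2):
    """The same body model extended along the hand's final direction by
    the tool length; the body schema now includes the pliers."""
    x, y = hand_position(q1, q2)
    return np.array([x + TOOL * np.cos(q1 + q2),
                     y + TOOL * np.sin(q1 + q2)])

print("hand:", hand_position(0.4, 0.6), "tool tip:", tool_tip(0.4, 0.6))
```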

An intriguing idea proposed by the Japanese neuroscientist Atsushi Iriki is that the ability to extend one's body with external objects and the ability to perceive other bodies as tools are two sides of the same coin. Notably, this blurring of boundaries requires a virtual concept of "self" that binds together subject and object, person and tool. Studying how the self adjusts as tools are added or removed may therefore help us understand how the "self" works.

Robots that learn to use tools as extensions of their bodies offer a fruitful testbed for validating emerging data and theories from neuroscience and psychology. At the same time, this research will lead to smarter, more capable machines working for us, and with us, in many fields.

This is perhaps the most important aspect of the new study. It combines psychology, neuroscience, and robotics to address one of the most fundamental questions in science: who am I?

Source: https://habr.com/ru/post/445260/

