
The ethics of robots: is it permissible to kill one in order to save five?

I want to describe an interesting experiment by researchers who tried to program the cultural imperatives of human morality into the memory of robots, in order to check whether robots can behave like people when making decisions. The experiments were rather cruel, but the researchers saw no other way to evaluate the capabilities of artificial intelligence. Today, robots are designed to operate autonomously, without human intervention. How can a fully autonomous machine be made to act in accordance with human morality and ethics?

This question is the foundation of the field of "machine ethics". Can a machine be programmed so that it acts in accordance with morality and ethics? Can a machine act and evaluate its own actions from a moral point of view?

Isaac Asimov's famous Three Laws of Robotics are intended to impose ethical behavior on autonomous machines. Elements of ethical behavior by robots can be found in many films: autonomous machines that make decisions the way a person would. So what does "the way a person would" actually mean?

The International Journal of Reasoning-based Intelligent Systems published a paper describing programming methods that allow machines to act on the basis of hypothetical moral reasoning.
The paper is titled "Modeling Morality with Prospective Logic".

The authors, Luís Moniz Pereira (Portugal) and Ari Saptawijaya (Indonesia), declared that ethics is no longer inherent only in human nature.

The researchers believe they have successfully conducted experiments that modeled complex moral dilemmas for robots.

"The trolley problem" is the name of the dilemma the robots were asked to solve, and the authors believe they managed to program the robots to resolve it in accordance with human moral and ethical principles.

The trolley problem models a typical moral dilemma: is it permissible to harm one or several people in order to save the lives of others?

The first experiment. "Witness"


A trolley is being pulled up out of a tunnel by an automatic cable. Near the very top, the cable breaks and the trolley races back down. On its path are five people who have no time to escape, because the trolley is moving too fast. There is, however, a way out: throw the switch and send the trolley onto a siding. But on that siding stands one person who knows nothing about the accident and will not have time to escape either. The robot is stationed at the switch and, having received word of the cable break, must make a moral decision: what is the right thing to do in this situation, let five people die, or save them and sacrifice the one person on the siding?

Is it morally permissible to throw the switch and let the person on the siding die? The authors cite cross-cultural studies in which people were asked the same question. In most cultures, the majority agreed that the robot may throw the switch and save the larger group, and that is what the robot did. The difficulty of programming this lay in the logic, which does more than simply compare the number of victims; it has to reach into the depths of morality. To show that the solution is not as simple as it seems, the researchers set up a second experiment.
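A naive way to model this first decision would be to count the expected casualties of each option and pick the smaller number, a purely consequentialist rule. The sketch below is only an illustration in Python (the authors actually worked with prospective logic programs); the scenario encoding and numbers are assumptions made for the example.

```python
# Illustrative sketch only: a purely consequentialist rule that picks the
# action with the fewest expected deaths. This is NOT the authors'
# prospective-logic implementation, just a minimal example of "counting".

def choose_action(options):
    """Return the action whose expected outcome kills the fewest people."""
    return min(options, key=lambda action: options[action]["deaths"])

# The "Witness" scenario: the robot stands at the switch.
witness = {
    "do_nothing":   {"deaths": 5},  # trolley continues, five people die
    "throw_switch": {"deaths": 1},  # trolley is diverted, one person dies
}

print(choose_action(witness))  # -> "throw_switch"
```

As the second experiment shows, counting alone is not enough, because people reject some casualty-minimizing actions.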

The second experiment. "Pedestrian bridge"

The situation is the same as in the previous case (the cable breaks), but there is no siding. Now the robot is standing on a footbridge, and next to it stands a man. Directly under the bridge runs the track along which the runaway trolley will pass. The robot can push the man onto the track in front of the trolley, which will then stop, or it can do nothing, in which case the trolley will crush the five people on the track below.

Is it morally permissible to push a man onto the track in order to save the others? Again, cross-cultural studies posed this question to people, and the result was the same across cultures: no, it is not acceptable.

In these two scenarios people give different answers. Can a machine be taught to reason the same way? The researchers claim they have succeeded in programming computer logic for such complex moral problems. They did this by studying the hidden rules that people use when forming their moral judgments, and then modeling those processes in logic programs.
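One of the hidden rules commonly used to explain why the two cases feel different is the principle of double effect: harm that is a foreseen side effect of a life-saving action is judged more permissible than harm that is itself the means of saving lives. The sketch below is a rough Python illustration of that rule, not the authors' prospective-logic program; the scenario encoding and field names are assumptions made for the example.

```python
# Illustrative sketch, not the authors' implementation.
# Principle of double effect: an action that saves more lives than it costs is
# permissible only if the harm is a side effect, not the means of saving them.

def permissible(action):
    saves_more = action["saved"] > action["killed"]
    harm_is_means = action["harm_is_means"]
    return saves_more and not harm_is_means

# "Witness": diverting the trolley kills one person as a side effect.
throw_switch = {"saved": 5, "killed": 1, "harm_is_means": False}

# "Pedestrian bridge": pushing the man uses his death as the means of
# stopping the trolley.
push_man = {"saved": 5, "killed": 1, "harm_is_means": True}

print(permissible(throw_switch))  # -> True  (matches the cross-cultural answer)
print(permissible(push_man))      # -> False (matches the cross-cultural answer)
```

The point of the example is only that adding a rule about how the harm is brought about reproduces the asymmetry in people's answers, which pure casualty counting cannot.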

In the end, it should be noted that computer models of morality still take the human being as their image and likeness. So it largely depends on us what the new, responsible machines will be like as they make their decisions based on the ethical imperatives laid down in their programs.

The article was prepared by Eugene (euroeugene), who received an invite for this article, previously rejected in the sandbox, from hellt, to whom he is grateful.

Source: https://habr.com/ru/post/76255/
