
Code of honor for combat robots

Georgia Tech professor Ronald Arkin has created the world's first software package implementing a set of ethical rules for combat robots. The program helps a machine decide when it is ethical to kill and when it is not. Ideally, robots should behave like human soldiers who are guided by a code of military honor.

Combat robots are now being adopted by many of the world's armies. In Iraq alone there are hundreds of such machines, mostly drones (unmanned aerial vehicles). Today they are controlled by human operators, but in the long run the goal is to switch drones to fully automatic operation, which would greatly increase their effectiveness.

Arkin's program makes it possible to choose the type of weapon more intelligently (for example, when there is a risk of damaging nearby structures of cultural value), and it also provides an objective assessment of the situation, something human soldiers are not always capable of in the heat of battle.
Professor Arkin published the results of his work in the paper “Governing Lethal Behavior in Autonomous Robots” (PDF).

In fact, the first set of ethical rules for robots was formulated by Isaac Asimov in his famous Laws of Robotics. The First Law reads: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” Clearly, Asimov could not have imagined how widespread combat robots would become. His laws are completely inapplicable to modern UAVs armed with missiles, or to tracked robots with machine guns, whose primary task is killing.

Source: https://habr.com/ru/post/60064/

