
Smart contracts for robots and artificial intelligence

Image: Cheylin


Worldwide, dozens of articles are constantly published on the need to create legislation for robots and artificial intelligence (AI). According to Gabriel Hallevy, a professor at Ono Academic College (Israel) and one of the leading legal thinkers in this new area of law: "Today we are in a vacuum - a legal vacuum. We do not know how to relate to these creatures." And quite recently Bill Gates himself said that since robots are beginning to take people's jobs, they should pay taxes.
According to the research of Ryan Calo, American law treats a robot as a programmed machine that carries out the will of a human. Therefore, in all cases it is the creators who are responsible for the robot's actions. This approach causes no controversy as long as autonomous robotic systems are not widespread. But what if we are talking about, for example, the Tesla plant, which employs 160 robots of various kinds? For any incident, responsibility could be pinned on the developer, the programmer, the supplier company, the shop-floor manager, and so on.


On all continents, debate rages over how to resolve the situation. If we set aside the extremist calls to extend current administrative and criminal law to robots and punish them, up to and including dismantling, several approaches remain. Some suggest that offending robots be taken from their owners and transferred to perform socially useful work. Others, more cautious, see a way out in the compulsory registration of robots, followed by insurance to compensate for damage.


With all the variety of approaches to law for robots and AI, the question remains: who exactly will be held responsible if individuals or corporations are harmed by a robot or an AI? Three unsolved problems impede the practical development of legislation for robots as far as determining responsibility is concerned:


The first problem. Robotic systems controlled by AI and capable of learning are highly complex autonomous devices. A large number of people and companies participate in their creation and operation. Among lawyers this is known as the long-chain problem. As a rule, the hardware and software of each robot and AI are produced by different corporations. Moreover, in complex systems the hardware and software manufacturers are themselves not one but several companies and individual developers. Nor should we forget the telecommunications providers. Often, complex robotic systems are also tied into the Internet of Things. And this is still not all: there are also the organizations that purchase and operate these robots. So the length of the chain reaches 12-15 parties.


The second problem. Real life differs from games (not only chess and checkers but also, for example, poker) in its non-deterministic character. Context and the particulars of the situation play a huge role in life. Depending on the situation, questions of responsibility, culpability, and so on are resolved in different ways. In law for people, this context is taken into account through the jury. It is the jury that delivers the verdict, applying laws and precedents to the context of a specific situation.


The third problem. In practice, both today and in the near future, complex robotic systems will be completely autonomous in only a small number of cases. This is partly due to the position of state institutions and of public opinion. Therefore, a significant number of creators and operators of AI-controlled robotic systems rely on hybrid intelligence: the joint work of human and machine. Accordingly, the protocol of human-machine interaction must be written into legislation for robots. As practice shows, in many systems it is the human who is the most vulnerable link.


In addition, this problem has another side. The main concerns associated with the use of autonomous robotic systems are the intentional or unintentional harm they may do to living creatures. In the case of deliberate harm, the situation is clear: we must look for cybercriminals. In the case of unintentional harm, the situation is less clear. Judging by the history of human interaction with technology, it is safe to say that in most future troubles with robots, the fault will lie with people who violate safety procedures and various rules.


For all the fierceness of the debates about precedents for forming administrative and, possibly, criminal law for robots and AI, the key issue of determining responsibility is not given due attention. One can argue at length about the need to punish crimes, but until there is a method of determining responsibility for crimes and punishable actions that is clear and accepted by society, corporations, and states, the debate will remain theoretical.


In developing proposals for legislation for robots and AI, the mainstream tendency is to apply to robots the legal solutions and norms that apply to humans. The reverse situation has developed around "smart contracts". There, the attempt is to replace flexible, contextual law with algorithmic procedures. But rigid algorithms have little chance of replacing the flexible, contextual legislation used by individuals and companies.


In life, as in arithmetic, two minuses can make a plus. Smart contracts based on the blockchain are an ideal tool for establishing and dividing responsibility within the framework of law for robots and AI. Being, in essence, a cryptographically protected distributed database, the blockchain is well suited as the basis of legislation for robots and AI.


Autonomous automated systems controlled by AI, despite their complexity and versatility, remain algorithmic devices. The interactions between the various software and hardware blocks of a complex system are best recorded and executed through the blockchain.
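To make this idea concrete, here is a minimal sketch of the kind of tamper-evident record the article has in mind: a hash-chained event log in which each entry commits to the previous one. The component names and events are hypothetical, and a real blockchain would add distribution and consensus on top of this chaining.

```python
import hashlib
import json
import time


class EventLog:
    """A minimal hash-chained log: each entry commits to the previous one,
    so no party in the long chain can silently alter recorded history."""

    def __init__(self):
        self.entries = []

    def record(self, component, event):
        # Link the new entry to the hash of the previous entry.
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {"component": component, "event": event,
                 "timestamp": time.time(), "prev_hash": prev_hash}
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append(entry)

    def verify(self):
        """Recompute every hash and check each back-link."""
        prev = "0" * 64
        for e in self.entries:
            if e["prev_hash"] != prev:
                return False
            body = {k: v for k, v in e.items() if k != "hash"}
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()
                              ).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True


log = EventLog()
log.record("lidar_driver", "obstacle detected at 2.1 m")
log.record("planner", "emergency stop issued")
assert log.verify()
```

Because each entry's hash covers the previous entry's hash, changing any recorded interaction after the fact invalidates every later entry, which is exactly the property needed when many parties share one history.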


With this approach, smart contracts act as a legal module in any complex robotic system, including AI-controlled ones, defining the scope and limits of responsibility for everyone involved in the creation and operation of that system.
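What such a "legal module" might look like can be sketched as a simple allocation table. The parties, components, and liability shares below are invented for illustration; in practice they would be negotiated terms encoded in the contract.

```python
# A toy "legal module": hypothetical parties and shares, not a real legal scheme.
RESPONSIBILITY = {
    # component at fault -> list of (party, share of liability)
    "hardware":  [("hardware_maker", 1.0)],
    "software":  [("software_vendor", 0.7), ("integrator", 0.3)],
    "operation": [("operator", 1.0)],
}


def allocate(component, damages):
    """Split damages among the parties the contract binds to this component."""
    return {party: round(damages * share, 2)
            for party, share in RESPONSIBILITY[component]}


print(allocate("software", 10_000))
# -> {'software_vendor': 7000.0, 'integrator': 3000.0}
```

The point of the sketch is that once the scope of responsibility is written down as data, the division of damages becomes a deterministic computation rather than a dispute among 12-15 parties.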


Smart contracts for robots can simultaneously perform at least three functions:

- record the interactions between the hardware and software components of a robotic system;
- define the scope and limits of responsibility for everyone involved in its creation and operation;
- encode standards, technical rules, and safety regulations in algorithmic form.
Any rules are hard-coded algorithms: they prescribe strictly defined actions in strictly defined situations. The blockchain is therefore well suited for concluding smart contracts between the manufacturers of complex autonomous systems on the one hand and their users on the other. Working with autonomous systems, people should not only gain opportunities that did not exist before but also bear responsibility for their own actions, written down in algorithmic language. In the event of an incident, the smart contract, together with the recorded sensor readings, will make it possible to establish who exactly, the automated system or the person, is to blame for what happened.
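A minimal sketch of this incident check, under the assumption that the contract's rules and the telemetry fields look something like the hypothetical ones below:

```python
# Hypothetical contract rules: the names, fields, and limits are illustrative.
SAFETY_RULES = {
    "max_speed": 1.5,          # m/s allowed in a shared workspace
    "override_forbidden": True,  # manual override of safeguards is banned
}


def attribute_fault(telemetry):
    """Return whom the recorded telemetry implicates, per the contract's rules."""
    if telemetry.get("manual_override") and SAFETY_RULES["override_forbidden"]:
        return "operator"             # the human bypassed a safeguard
    if telemetry.get("speed", 0) > SAFETY_RULES["max_speed"]:
        return "control_software"     # the machine exceeded its own limit
    return "undetermined"


assert attribute_fault({"manual_override": True, "speed": 0.9}) == "operator"
assert attribute_fault({"manual_override": False, "speed": 2.0}) == "control_software"
```

Fed with sensor readings from a tamper-evident log, such a check turns the question "was it the machine or the person?" into a reproducible computation rather than a matter of opinion.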


The matter is simple: leave people to people and machines to machines, and translate standards, technical rules, safety regulations, and the like into the language of smart contracts for robots and the people interacting with them.



Source: https://habr.com/ru/post/401885/

