The ecosystem of gadgets around us is getting smarter. Despite the obvious difficulties of creating full-fledged artificial intelligence, progress in this field is quite noticeable. And it already scares some people. More and more often we hear from various experts that a machine uprising probably awaits us. Even Stephen Hawking, impressed by his new electronic assistant's ability to predict what he wanted to say, did not escape this fashion. Others reasonably counter such fears by pointing out that computers would need to become not just smart but also inventive, and that at creating something genuinely new, rather than recombining already known elements, computers are not yet capable even at the most primitive level. So there is no need to worry about scenarios à la The Matrix or Terminator: for many decades to come, they definitely do not threaten us.
On the other hand, smart machines and robots of all kinds will actively penetrate our lives; the trends are obvious. First came unpretentious cleaning robots, able to choose their own cleaning program and plot their own route. Then voice assistants of varying degrees of intelligence, which understand our speech and even some of our emotions, made their way into our smartphones. And whether this pleases or frightens you, the development of robotics and artificial intelligence will eventually raise a number of moral, ethical, social, and legislative dilemmas. For example, is it right for the police to use drones equipped with tasers? Should humanoid robots that look extremely similar to humans have any rights?
Peacekeeping robots
The hottest topic today is the use of combat drones. The instigators, as usual, were the military, who have been using drones for reconnaissance and strikes for years. Alas, their example has proved contagious, and many "civilian" security agencies are now also eager to use remotely controlled aircraft. It is very important that society discuss, today, who may use drones and how.
This question seems very important to me, because a lack of oversight and the impunity of using drones for "punitive" purposes will lead to nothing good. Do you want flying robots constantly hovering over your head, keeping an eye on you and liable to shoot something at you at any moment? Reassurances like "a law-abiding person has nothing to fear" were invented by those whose job it is to "keep order." Undoubtedly, the experience of the South African company Desert Wolf, which created the Skunk Riot Control Copter drone for spraying pepper gas, has been carefully studied by security forces around the world.
Alas, the South Africans are far from alone in their search for new "professions" for drones. For example, the Texas-based company Chaotic Moon created a prototype drone armed with a taser. The machine received the cynical name CUPID: Chaos Unmanned Personal Intercept Drone.
In fact, the police and security services in our country already use drones for surveillance, and the range of their equipment is certainly not limited to video cameras and thermal imagers.
Upbringing and education
The emergence of AI-based services such as Siri and Julie has, for the first time in human history, allowed us to converse with machines. However, this may lead to quite unexpected consequences for us. It is one thing to teach a computer to understand human speech, and quite another to instill good manners in it. We ourselves learn the art of communication by interacting with the various people around us. It is the examples of other people's behavior that educate us and shape our personality and value system. When we are surrounded by all sorts of robots with extremely limited capacity for full-fledged communication, how will this affect our social skills? Having grown accustomed to unceremonious communication with not-too-smart machines, we will find it increasingly difficult to switch back to the mode of communicating with our own kind, and this will inevitably lower the general level of education and good manners, which have already shown no growth in recent years.
On the other hand, this fear may simply be another bout of hand-wringing. Every time a new, unfamiliar technology appears, people begin to proclaim the decline of morals, that we are all doomed, and that the whole thing is in terribly bad taste. This is the usual fear of the new, the incomprehensible, and the supposedly unnecessary and harmful.
Of course, entrusting a robot with the upbringing and care of a child would not be the wisest decision, simply because children learn by watching the adults around them. What can a machine teach them? On the other hand, if such a machine is used for educational purposes, it can be beneficial. Creating robot nurses for the elderly and the sick would also not be the best idea from the standpoint of humanity: all people need communication, especially the most vulnerable. And when a soulless piece of metal is caring for you, and you cannot exchange a single human word with it, anyone could sink into depression.
Personal attachment
Numerous experiments and observations of people interacting with robots reveal the following tendency: the more a robot looks like a person, the stronger the attachment a person can form to it. This is a consequence of our propensity to project, to endow things with human traits. We happily give names to, and find signs of character in, cars, computers, and other equipment far from anthropomorphic in appearance. How much more so for a whole robot, with a head, arms, and sometimes even legs. How can one not feel responsible for what one has tamed, to say nothing of the many other emotions a machine can evoke?
One example is a study of soldiers who used robots for mine clearance. It turned out that the robots were often given names, and in the event of their destruction or failure the soldiers even held a funeral ceremony. In interviews, the soldiers confessed that they perceived these robots as true comrades and were upset and angry when they "died."
As robotics develops, more and more importance will be attached to how the appearance and behavior of robots influence people's attitudes toward them and the emotions we experience, since this will directly affect sales. For example, at the Massachusetts Institute of Technology, experiments were conducted in which subjects were asked to "hurt" or "kill" a robot by striking it with some object. The vast majority of people felt discomfort and did not want to do it. Such research will also matter when creating robots to which people should form the least attachment, for example, robots for military use: soldiers already have enough to worry about without grieving because their beloved "pet" was blown up or shot down.
Nuances of legislation
The more complex, multifunctional, and humanoid robots become, the more unpredictable the moral dilemmas and ambiguous situations associated with them. Already, the list of uncomfortable questions about robots is very long. Should robots be entitled to protection... from abuse? I am sure the very possibility of answering "yes" strikes you as absurd. Yet for all its provocativeness, the question may make sense. The consequences of the social involvement of humanoid robots can manifest themselves in yet another non-obvious way: if aggressive, abusive behavior toward humanoid robots is in no way condemned or prevented, people will carry that model over into their communication with other people. That is, violence against robots can lead to violence against one's own kind. From this point of view, it is worth considering preventive legislative protection of the rights of robots, however crazy that sounds right now.
In general, our modern laws are in no way prepared for the appearance of robots in our lives. Laws, in essence, recognize only two global concepts: a person and an inanimate object, a thing. In robots, however, these two concepts may one day merge.
For example, jurisprudence includes the concept of "intent," one of the fundamental notions of criminal law. The development of artificial intelligence may one day force us to decide how to evaluate the actions of a robot: as intentional (which would mean recognizing the robot as a person) or as those of a mechanism executing a built-in program (in which case the consequences of its actions would have to be answered for by its owner). But would you agree to be held responsible when your "very smart" robot makes a mess of things? After all, you can hardly put it in jail. If lawmakers take the path of least resistance and classify robots as objects, this could actually put an end to the development of domestic robots: people will simply be afraid to buy expensive devices with an unpredictable program in their "head" that could land them in trouble at any moment.
And this is just one of many possible legislative paradoxes. Just wait until self-driving cars appear on the roads. Should the owner, riding in the passenger seat, be held responsible when his "unmanned" car knocks down a pedestrian who did not violate traffic rules? By all accounts, he should not. Then the blame will be pinned on the manufacturers, who cannot afford such reputational disasters. As a result, we can expect many protracted, years-long legal proceedings in which each side will try by every means to avoid responsibility, exploiting the legislative system's unpreparedness for such situations.
And here is another interesting question: can the police interrogate a robot? "Where were you on Friday night?" Who should answer for a robot's actions: the robot itself, its owner, or the manufacturer? In general, many fascinating changes await our society, changes most of us have not even begun to think about.