When a self-driving car kills someone, the program (and the programmers) will be judged

The year is 2034. A drunk man staggers along the sidewalk at night, stumbles, and falls right in front of a self-driving car, which hits him and kills him on the spot. Had there been a human behind the wheel, the death would have been ruled an accident: the fault lay with the pedestrian, and no driver could have avoided him. But the standard of the "average driver" (foreign legislation uses the term "reasonable person") disappeared in the 2020s, when the spread of self-driving cars cut the number of accidents by 90%. Now we have to talk about the "average robot."
The victim's family sues the manufacturer of the self-driving car, arguing that although the car did not have time to brake, it could have swerved around the pedestrian, crossing the double yellow line and colliding with the self-driving car in the oncoming lane. A reconstruction of the incident from the car's sensor data confirms this. The plaintiff's lawyer, cross-examining the car's lead developer, asks: "Why didn't the car swerve?"
Today, courts do not ask human drivers why they did or did not do something; the question is moot, because people make mistakes: drivers panic, misjudge, react on instinct. But when a robot is driving, the question "why?" becomes perfectly legitimate. Human ethical standards, only imperfectly captured in law, rest on many assumptions that engineers have not yet gotten to. The most important of them is that a person can recognize when it is necessary to depart from the letter of the law in order to preserve its spirit. Now engineers need to teach cars and other robots to make such judgment calls.
The computerization of driving began in the 1970s, when anti-lock braking systems appeared. Each year now brings new developments such as automated steering, automated acceleration, and emergency braking. Testing of fully autonomous cars, albeit with a human driver present, is already permitted in parts of Britain, the Netherlands, Germany, and Japan. In the US it is explicitly legal in four states and the District of Columbia, and at least not prohibited in the rest. Google, Nissan, and Ford predict that self-driving cars will appear within 5-10 years.
Autonomous vehicles gather information about their environment from sensors: video cameras, ultrasonic rangefinders, radar, and lidar. In California, self-driving cars are required to hand over all sensor data from the 30 seconds preceding any collision, and quite a few collisions have already accumulated, including one caused by a Google car. Using records of what the car perceived, the alternatives it considered, and its decision logic, engineers can reconstruct the events around a collision quite accurately. The computer can be made to replay its reasoning, much as you can debrief a person who has just played a game or used a driving simulator.
Regulators and litigants will be able to hold driverless vehicles to superhuman safety standards and scrutinize the collisions that still occur, however rarely. Manufacturers and programmers will have to defend their products' actions in a way today's drivers never dreamed of.
Driving always involves risk, and decisions about how that risk is distributed among drivers, pedestrians, cyclists, and property carry an ethical component. It matters, to engineers and to everyone else, that a car's decision-making system weighs the ethical implications of its actions.

[Photo caption: the Google car that collided with a bus]

The usual response to morally ambiguous situations is to follow the law while minimizing damage. The strategy is attractive: it not only lets the developer easily defend the car's actions ("We fully complied with the law"), it also delegates responsibility for defining ethics to legislators. Unfortunately, it also places a heavy burden on the law.
For example, in most states the law relies on drivers' common sense and says little about behavior before a collision. In the scenario described above, a car following the exact letter of the law would not cross the double yellow line, even at the risk of running over the drunk, although the oncoming lane holds only an empty self-driving car. The law rarely carves out exceptions for emergencies as specific as a person falling onto the roadway, and where it does, as in Virginia, the statute merely implies that crossing the double line is lawful as long as it does not cause a crash ("if such movement can be made safely"). Developers, then, will have to decide for themselves when it is safe to cross a double yellow line.
A self-driving car will seldom be 100% certain that the oncoming lane is empty and the double line can be crossed without fear; it will estimate its confidence at, say, 98% or 99.99%. Engineers will need to decide in advance what level of confidence is sufficient to cross, and how that threshold should vary depending on what the car is trying to avoid: a plastic bag or a fallen pedestrian.
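As a minimal sketch of what such a rule might look like (the threshold values and obstacle categories here are invented for illustration, not taken from any real system):

```python
# Hypothetical sketch: deciding whether to cross a double line.
# Thresholds and categories are illustrative assumptions.

CROSSING_THRESHOLDS = {
    "plastic_bag": 0.9999,      # low stakes: demand near-certainty before crossing
    "road_debris": 0.999,
    "fallen_pedestrian": 0.98,  # high stakes justify accepting more uncertainty
}

def may_cross_double_line(obstacle: str, lane_clear_confidence: float) -> bool:
    """Allow crossing only if confidence that the oncoming lane is clear
    exceeds the threshold set for this type of obstacle."""
    threshold = CROSSING_THRESHOLDS.get(obstacle, 1.0)  # unknown obstacle: never cross
    return lane_clear_confidence >= threshold

# A 99% confidence reading clears the bar for a fallen pedestrian,
# but not for a plastic bag.
assert may_cross_double_line("fallen_pedestrian", 0.99)
assert not may_cross_double_line("plastic_bag", 0.99)
```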
Self-driving cars already make decisions about breaking the law. Google has admitted that its cars are allowed to exceed the speed limit to keep up with traffic where slowing down would be dangerous. Most people would condone speeding in other situations too, for example when rushing someone to the hospital. Chris Gerdes and Sarah Thornton of Stanford University argue against hard-coding laws into decision-making algorithms, since drivers evidently treat laws as flexible, weighing the cost of breaking them against the potential gain in time. Nobody wants to crawl behind a cyclist for kilometers because their car refuses to edge even slightly across a double yellow line.
Even staying within the law, a self-driving car makes many small, safety-sensitive decisions. Highway lanes are usually almost twice as wide as a typical car, and drivers use that width to steer around debris or keep their distance from erratically moving vehicles.
A 2014 Google patent develops this idea, describing how a self-driving car can position itself within a lane to reduce risk. The company gives the example of a car in the middle lane of a three-lane road, with a truck on the right and a small car on the left. To optimize its own safety, the car should shift slightly to the left, closer to the small car.
That seems sensible, and it is what most drivers do, consciously or not. But it raises ethical questions. By edging toward the small car, the self-driving car reduces its overall risk but distributes it unevenly. Should the small car bear more risk simply because it is small? If this were just one driver's personal habit, it would hardly matter. But once such a redistribution is formalized and rolled out across every self-driving car, the consequences become more serious, as the sketch below illustrates.
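A minimal sketch of the kind of calculation the patent describes. The hazard weights and clearances are assumptions made up for illustration; a real system would derive them from crash data:

```python
# Hypothetical sketch of in-lane positioning: choose the lateral offset that
# minimizes the car's own total expected risk from its neighbors.

TRUCK_HAZARD = 3.0      # assumed: a collision with a truck is far more damaging
SMALL_CAR_HAZARD = 1.0

def total_risk(offset: float) -> float:
    """offset in meters from lane center; positive = toward the truck (right).
    Risk from each neighbor is modeled as inversely proportional to distance."""
    gap_to_truck = 1.5 - offset        # nominal 1.5 m clearance on each side
    gap_to_small_car = 1.5 + offset
    return TRUCK_HAZARD / gap_to_truck + SMALL_CAR_HAZARD / gap_to_small_car

# Evaluate candidate offsets and pick the least risky one.
candidates = [i / 100 for i in range(-100, 101)]   # -1.0 m .. +1.0 m
best = min(candidates, key=total_risk)
print(f"Optimal offset: {best:+.2f} m")  # negative: shifted toward the small car
```

The optimum lands on the side of the small car: minimizing the robot's own expected damage systematically pushes risk onto the smaller vehicle, which is exactly the redistribution at issue.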
In each of these examples, the self-driving car weighs several values: the value of whatever it might hit, and the value of its own passengers. People make such decisions instinctively; a self-driving car will make them through a carefully designed risk-management strategy, one that defines risk as the magnitude of harm from an undesirable event multiplied by its probability.
In 2014, Google also patented an application of risk management. The patent describes a car that might change lanes to get a better view of a traffic light. Or the car might stay in its lane to avoid a collision risk, say from a faulty sensor reading, at the cost of a worse view of the light. Each possible outcome is assigned a probability and a positive or negative value (a benefit or a cost); each value is multiplied by its probability, and the products are summed. If the benefits outweigh the costs by a sufficient margin, the car performs the maneuver.
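A minimal sketch of that expected-value calculation. The probabilities, magnitudes, and decision margin are invented for illustration; they are not figures from the patent:

```python
# Hypothetical sketch of the cost-benefit rule: sum probability-weighted
# outcome values for a maneuver, and execute it only if the expected gain
# clears a margin. All numbers are illustrative assumptions.

# Outcomes of changing lanes to see the traffic light:
# (probability, value) pairs; positive values are benefits, negative are costs.
lane_change_outcomes = [
    (0.95, +10.0),    # likely: clear view of the light, smoother trip
    (0.05, -500.0),   # unlikely: collision during the lane change
]

DECISION_MARGIN = 0.0  # require net expected benefit above this margin

def expected_value(outcomes: list[tuple[float, float]]) -> float:
    """Sum of value * probability over all outcomes of a maneuver."""
    return sum(p * v for p, v in outcomes)

def should_maneuver(outcomes: list[tuple[float, float]]) -> bool:
    return expected_value(outcomes) > DECISION_MARGIN

ev = expected_value(lane_change_outcomes)   # 0.95*10 + 0.05*(-500) = -15.5
print(f"Expected value: {ev:+.1f} -> maneuver: {should_maneuver(lane_change_outcomes)}")
```

With these numbers the small chance of a collision outweighs the likely benefit, so the car stays in its lane.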
The problem is that the risk of a collision is very small: the average US driver gets into an accident once every 257,000 kilometers, or roughly once every 12 years (in Russia, once every 1.6 years; the difference may stem from the far greater share of highway driving in the US. Translator's note). So even once self-driving cars take to the streets and begin streaming huge volumes of data, reliable estimates of the probabilities of rare events will be a long time coming.
Estimating the cost of harm is harder still. Property damage is easy to price, as insurers have plenty of experience there, but injury and death are another matter. Assigning a monetary value to human life has a long history, and the value is usually expressed as the amount society would spend to prevent one average fatality. A safety improvement that gives each of 100 people a 1% better chance of survival saves one statistical life. The Department of Transportation recommends valuing one prevented fatality at $9.1 million, a figure derived from market data, including the wage premiums people demand for hazardous work and the amounts they willingly spend on safety equipment such as smoke detectors. We must weigh not only safety but also lost mobility, that is, time spent on the road, which the Department values at $26.44 per hour.
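A small worked example of how these figures combine. The fleet size, delay, and lives saved are invented for illustration; only the $9.1 million and $26.44 values come from the text above:

```python
# Hypothetical trade-off using the Department of Transportation figures
# cited above. The scenario numbers are illustrative assumptions.

VALUE_OF_STATISTICAL_LIFE = 9_100_000   # USD per prevented fatality (DOT)
VALUE_OF_TRAVEL_TIME = 26.44            # USD per person-hour (DOT)

# Suppose a more cautious driving policy adds 2 minutes to each of
# 10 million trips per year, but prevents 3 fatalities per year.
trips_per_year = 10_000_000
extra_hours = trips_per_year * (2 / 60)
time_cost = extra_hours * VALUE_OF_TRAVEL_TIME

fatalities_prevented = 3
safety_benefit = fatalities_prevented * VALUE_OF_STATISTICAL_LIFE

print(f"Time cost:      ${time_cost:,.0f}")       # ~$8.8 million
print(f"Safety benefit: ${safety_benefit:,.0f}")  # $27.3 million
print("Policy worthwhile:", safety_benefit > time_cost)
```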
On paper this all looks tidy. But tallying risk in lost lives and travel time leaves out the differing moral weight of how we expose people to risk. For example, a self-driving car that values all lives equally would have to give a helmetless motorcyclist more room on the road than a motorcyclist in full gear, because the former is less likely to survive a crash. But that is unfair: should people be penalized for looking after their own safety?
Another difference between robot and human ethics is that a robot's ethics can be skewed by its programmers, even with the best of intentions. Imagine an algorithm that adjusts the size of the buffer zone around pedestrians in different neighborhoods based on an analysis of damages awarded in lawsuits filed by pedestrians struck there. On the one hand, this is sensible, efficient, and well-intentioned. On the other, damage awards may track the average income of a neighborhood's residents. The algorithm would then penalize the poor, granting them a smaller buffer zone and slightly increasing their risk of being run over.
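A minimal sketch of how such a well-intentioned adjustment encodes bias. All numbers here are invented:

```python
# Hypothetical illustration: sizing pedestrian buffer zones from past claim
# payouts. Because payouts correlate with neighborhood income, the
# "efficient" rule quietly gives poorer pedestrians less protection.

BASE_BUFFER_M = 1.5          # nominal lateral clearance around a pedestrian
REFERENCE_PAYOUT = 500_000   # payout (USD) at which the base buffer applies

def buffer_zone(avg_claim_payout: float) -> float:
    """Scale the buffer with expected liability; clamp to a legal minimum."""
    scale = avg_claim_payout / REFERENCE_PAYOUT
    return max(1.0, BASE_BUFFER_M * scale)

print(buffer_zone(800_000))  # wealthy area: 2.4 m of clearance
print(buffer_zone(300_000))  # poor area:    1.0 m (clamped at the minimum)
```

Nothing in the code mentions income, yet the outcome discriminates by it: the bias enters through the training signal, not through any explicit rule.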
It is tempting to dismiss such questions as purely academic, but they cannot be sidestepped, because programs take everything literally. The consequences of actions will have to be evaluated before the actions are ever needed: at design time, not in a later software patch.
Partly for this reason, researchers use hypothetical situations in which a machine must choose between two evils. The most famous problem of this type is the trolley problem:
A heavy runaway trolley is hurtling down the tracks. In its path are five people, tied to the rails by a mad philosopher. Fortunately, you can throw a switch and divert the trolley onto a siding. Unfortunately, one person is tied to the rails of the siding. What do you do?
Do you sacrifice one life to save several? If not, people still die through your inaction; how do you resolve that contradiction?
Whole books have been written about such thought experiments. They stress-test simple, blunt approaches to ethical questions and expose the places where more nuance is needed. Suppose we program a self-driving car to avoid pedestrians at all costs. If a pedestrian suddenly appears in a two-lane tunnel and the car cannot brake in time, it will be forced to swerve out of its lane, even into the path of a bus full of passengers. The likelihood of such an event matters less than the flaw it exposes in the car's logic: giving pedestrians absolute priority over everyone else on the road can itself be very dangerous. A sketch of this failure mode follows.
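A minimal sketch of that flawed rule, with an invented scenario and invented harm scores, showing how an "avoid pedestrians at all costs" constraint picks the worse outcome:

```python
# Hypothetical illustration of a hard constraint gone wrong: any option that
# endangers a pedestrian is discarded outright, even when the remaining
# options cause far greater total harm. Scores are illustrative assumptions.

options = [
    # (name, endangers_pedestrian, expected_harm)
    ("brake in lane",            True,  1.0),   # grazes the pedestrian at low speed
    ("swerve into oncoming bus", False, 40.0),  # many passengers at risk
]

def choose(options):
    """'Pedestrians at all costs': filter first, minimize harm second."""
    allowed = [o for o in options if not o[1]]
    pool = allowed if allowed else options
    return min(pool, key=lambda o: o[2])

print(choose(options)[0])  # -> "swerve into oncoming bus"
```

A rule that instead weighed total expected harm would brake in lane; it is the hard constraint's absolute ordering that makes it dangerous.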
Ethics for self-driving cars is a solvable problem. We know this because other fields have already found ways to weigh comparable risks and benefits safely and sensibly. Donor organs are allocated to patients using a metric built from expected years of life and their quality. People in essential occupations, such as farmers and teachers, have been exempted from military conscription.
The problems robots face are harder. They must be solved quickly, on incomplete information, in situations the programmers could not have anticipated, using ethics that, once built into an algorithm, is applied all too literally. Fortunately, the public does not expect superhuman wisdom, only a rational justification of the machine's actions, one that takes the ethical dimension into account. The solution need not be perfect, but it should be thoughtful and defensible.