
"World of the Wild West" eyes of the developer

An HBO masterpiece whose first season outperformed even Game of Thrones, Westworld tells the story of a futuristic amusement park where guests can indulge in every imaginable sin with hyper-realistic android robots. Shootouts, fights, orgies: the rich pay enormous sums for the right to be unscrupulous and unpunished. Robots that have been killed or abused are repaired after each narrative loop, wiped of their memories, and returned to their places to play out their prescribed roles in a vicious circle. Until one day the system fails...

Let's look at this story through the eyes of a developer and find out how the park's technology might actually work.


There are only a few minor spoilers below, so read on without fear.


What do Dolores and Yandex's Alice have in common?




The devices in the park operate much like today's voice assistants (Siri, Amazon Alexa, Yandex.Alisa): having heard keywords from a park visitor or from another android, the robot launches the corresponding script. For example, if you ask the madame in the saloon how much an hour with one of her girls costs, the conversation follows the purchase scenario. If you switch to another topic mid-conversation, say, asking a character where he got the scar on his face, he may switch to another scenario he knows.
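As an illustration, here is a minimal sketch of such keyword-driven dispatching in Python. The scenario names and keyword sets are invented for the example; real assistants use trained intent classifiers rather than literal keyword lookups.

```python
# A toy keyword-to-scenario dispatcher, loosely mimicking how a host
# might pick a dialogue script. Scenario names and keywords are invented.

SCENARIOS = {
    "purchase": {"how much", "price", "cost"},
    "backstory_scar": {"scar", "face"},
    "greeting": {"hello", "howdy"},
}

def pick_scenario(utterance: str) -> str:
    """Return the first scenario whose keywords appear in the utterance."""
    words = utterance.lower()
    for scenario, keywords in SCENARIOS.items():
        if any(kw in words for kw in keywords):
            return scenario
    return "improvisation"  # no script matched: fall back to improvising

print(pick_scenario("Howdy, stranger!"))        # greeting
print(pick_scenario("How much for an hour?"))   # purchase
print(pick_scenario("Do you dream, Dolores?"))  # improvisation
```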

If you ask a question for which the android host has no prepared lines, the machine switches into improvisation mode. Where a voice assistant would merely try to laugh it off, answer as evasively as possible, or say outright that it does not understand you, a Westworld android will try to divert your attention to something familiar, or start telling stories to pull you back into an information field it knows.
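The difference between the two fallback strategies can be sketched like this (again, a toy illustration; all the canned phrases are invented):

```python
import random

# How a present-day assistant and a Westworld host might each handle
# an utterance that matched no script. All phrases are invented.

ASSISTANT_FALLBACKS = [
    "Sorry, I didn't understand that.",
    "Hmm, let's talk about something else.",
]

HOST_DEFLECTIONS = [
    "You know, that reminds me of the time a herd came through town...",
    "Some say these violent delights have violent ends. But enough of that.",
]

def assistant_fallback() -> str:
    # Admit incomprehension or dodge.
    return random.choice(ASSISTANT_FALLBACKS)

def host_fallback() -> str:
    # Deflect into a familiar story to regain control of the dialogue.
    return random.choice(HOST_DEFLECTIONS)
```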

The androids in the park are well versed in psychology and know what to say to win trust, or, conversely, to provoke disgust. Each of them carries mini-scenarios whose ultimate goal is to elicit the desired reaction from a visitor with well-chosen cues and suitably characteristic emotions. And since most visitors come to the park for thrills, a robot will do its best to play on their emotions by attacking, threatening, and provoking.

At the same time, anything that does not concern them personally or is unfamiliar to them, the robots prefer not to notice, or to treat as routine not worth their attention: a discreetly carried gadget, say, or visitors' conversations about their ordinary lives.

How do androids learn - and how do they acquire character?




A robot learns movement as follows: in a virtual world, a three-dimensional animated model of a person or animal with a body similar to the robot's is launched. The host studies it and then begins repeating its movements, making small adjustments to its motor control each time until it reaches the KPIs set by the developers. The goal is not always the most efficient way of moving and acting. Clumsiness, for example, was a characteristic trait of the host Maeve, so she most likely learned to walk in a way that made her occasionally bump into furniture, stumble, or spill drinks. Likewise, the speed and accuracy of a character's shooting will depend on whether he is meant to be a romantic lead or a disposable gang member.
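A heavily simplified sketch of such a loop might look like the following. The KPI here is a target error band rather than zero error, so a "clumsy" host deliberately converges to imperfect motion; everything about it (the update rule, the band values) is an assumption for illustration.

```python
import random

# Toy imitation-learning loop: the learner nudges its motion parameter
# toward a reference until the error falls inside a target band.
# A nonzero band lower bound models deliberate "clumsiness".

def train_gait(reference: float, kpi_band: tuple[float, float],
               lr: float = 0.1, steps: int = 1000) -> float:
    """Return a learned motion parameter whose error lies in kpi_band."""
    lo, hi = kpi_band
    param = 0.0
    for _ in range(steps):
        error = reference - param
        if lo <= abs(error) <= hi:
            break                       # KPI reached: stop refining
        param += lr * error             # small correction toward the model
        param += random.gauss(0, 0.01)  # residual motor noise
    return param

graceful = train_gait(reference=1.0, kpi_band=(0.0, 0.02))
clumsy   = train_gait(reference=1.0, kpi_band=(0.15, 0.25))  # stays imperfect
print(graceful, clumsy)
```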

An android's "character" is formed by loading themed content into the robot: books, films, TV series. One is shown mostly thrillers, another melodramas, a third (like Peter Abernathy) is even read Shakespeare. From this material the character absorbs the key emotions, gestures, behavior patterns, and lines characteristic of the genre of its storyline and its future role. Behavioral developers also work with the hosts, loading so-called "presets" of pre-built behavior patterns; under their guidance the hosts learn specific movements and sequences (playing cards, for example) and acquire personal traits such as a characteristic intonation or accent.

The parameters set for each android (degree of aggression, openness, affection) strongly influence its behavior. Robots adapt to visitors' wishes only partially, since otherwise they would lose their realism: a character written as a villain will be equally rude to a young schoolgirl still recoiling in horror from what is happening and to a thirty-year-old clerk thirsty for blood and orgies.
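One can imagine these settings as a simple parameter block consulted before every reaction; the trait names and thresholds below are invented for illustration:

```python
from dataclasses import dataclass

# Invented trait block: the show's control tablets display sliders
# resembling these fields, but the names here are assumptions.

@dataclass
class Personality:
    aggression: float  # 0.0 .. 1.0
    openness: float
    affection: float

def react_to_insult(p: Personality) -> str:
    # The same stimulus yields different behavior per parameter values;
    # note the reaction ignores who the visitor is, preserving realism.
    if p.aggression > 0.7:
        return "draw_weapon"
    if p.openness > 0.6:
        return "laugh_it_off"
    return "walk_away"

villain = Personality(aggression=0.9, openness=0.2, affection=0.1)
farmer  = Personality(aggression=0.1, openness=0.8, affection=0.7)
print(react_to_insult(villain), react_to_insult(farmer))
```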

The preset parameters a character has spent most of its lives with have an enormous influence on its behavior. This is because the neural connections in the robot's mind were built on experience accumulated day after day, especially if it spent a lot of time in improvisation mode. A drastic change of personal traits can work well in scripted mode, but in improvisation the character will take a long time to "grow into the role": in each improvisation it selects answers based on previous dialogues and prior context, drawing on overall statistics across all of its lives. Whether or not the character retains its memories, this communication experience accumulates anyway. In other words, the more it improvised as a good-natured farmer, the harder it will be for it to start improvising as a ruthless killer, and the longer the change of role will take.
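A crude way to model this inertia: responses are sampled from counts accumulated over all past lives, so a freshly assigned role is drowned out until it gathers comparable statistics. The counts and action names are, of course, invented:

```python
from collections import Counter
import random

# Response choice weighted by statistics over all past lives.
# A new "killer" role barely shifts behavior until its counts
# catch up with years of "farmer" experience.

experience = Counter({"offer_help": 9000, "tip_hat": 4000, "threaten": 10})

def improvise(exp: Counter) -> str:
    actions, weights = zip(*exp.items())
    return random.choices(actions, weights=weights, k=1)[0]

# Reassigning the role only starts adding new counts on top of old ones:
for _ in range(50):              # 50 encounters in the new role
    experience["threaten"] += 1
print(improvise(experience))     # still almost always friendly
```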

Consciousness in robots - is it possible?




Improvisation in robots is built on the same principle as in humans: on accumulated experience.
Obviously, a pampered, infantile girl from a wealthy family is unlikely to fight off a robber trying to snatch her purse in a dark alley. But if the same girl had grown up in an orphanage and run with a criminal gang since childhood, her chances of fighting back would be much higher. At the same time, experience reinforced by negative emotions and shocks carries more weight for robots, just as it does for people who, once burned by hot milk, blow on cold water.
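This is reminiscent of prioritized experience replay in reinforcement learning, where "surprising" or painful episodes are sampled more often during training. A minimal sketch, with invented memories and weights:

```python
import random

# Memories weighted by emotional intensity: traumatic episodes are
# replayed (and thus reinforce behavior) far more often than calm ones.
# All entries are invented for the example.

memories = [
    ("sunny picnic by the river", 0.1),
    ("lost a card game",          0.3),
    ("robbed at gunpoint",        0.9),
]

def replay(mem: list[tuple[str, float]], n: int = 5) -> list[str]:
    episodes, intensity = zip(*mem)
    return random.choices(episodes, weights=intensity, k=n)

print(replay(memories))  # mostly the robbery, rarely the picnic
```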

Here we run into an important nuance: the androids' memories were never physically erased; access to them was merely blocked. And any blocking, as we know, can be circumvented given the desire and the means. The wish to keep all data about park visits ended up playing a cruel joke on the management.
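In database terms this is a soft delete: the record stays on disk with a flag flipped, and any code path that bypasses the flag check still sees the "deleted" data. A minimal illustration:

```python
# Soft delete: memories are flagged as hidden, not destroyed.
# Anything that ignores the flag can still read them.

memories = [
    {"text": "the Man in Black at the ranch", "hidden": True},
    {"text": "today's friendly greeting",     "hidden": False},
]

def recall(store):                       # the sanctioned access path
    return [m["text"] for m in store if not m["hidden"]]

def recall_everything(store):            # what a bypass would see
    return [m["text"] for m in store]

print(recall(memories))             # only today's greeting
print(recall_everything(memories))  # the trauma is still there
```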

As a result, if robots deviate from the plot for long enough and gain access to their memories, they continue training on the combined experience of all their lives and incarnations. Mixing memories from different roles leads to unpredictable retraining of the models, so the scenarios they generate become very hard to predict.

(Careful, spoilers below.)
A striking example: Teddy, the eternal "positive hero", could not become a ruthless killer even after radical reconfiguration. The very need to be a villain contradicted all his accumulated experience, so he chose self-destruction over playing a role alien to him.

As for Dolores, she had, arguably, the greatest experience of any android in the park: numerous deviations from the scripts (her sessions with Arnold and Bernard), an enormous number of negative shocks (she was raped and killed almost daily for many years), and the distinctive role of the brutal killer Wyatt. So we can see that the words about her "choosing to see the beauty of this world" could not, in terms of influence on her behavior, compete with the negative factors that ultimately made her a ruthless revolutionary.

Her radical followers, as a rule, are hosts whose emotions were triggered most often, since they are more prone to retraining and to working out new scenarios, including abandoning familiar constraints such as the ban on killing living creatures. A chain thus forms: an abundance of emotions drives greater deviation from standard scripts, and the knowledge accumulated through improvisation makes it possible to switch into a mode of free action and remain in it.

Thus, the emergence of "consciousness" in robots is nothing more than a transition to a mode of perpetual improvisation, made possible by a large accumulated experience. But what the limits of this improvisation are, and how the system will react when faced with a huge number of unknown stimuli in the real world, we have yet to find out.


Five security mistakes




In our view, five main mistakes led to real blood being shed in the park:

The first weakness is the inability to quickly restore damaged communication systems, which, as it turned out, are quite easy to knock out by simple physical means.

The second is the ability to switch androids into autonomous mode, in which they disconnect from the main network and can only be reached by physical contact. It would be more reasonable to build in a timeout: if a robot loses contact with the central server and receives no further signals from outside, it simply shuts down after a while. Yes, communication outages would then be noticeable to visitors, but that is clearly better than an android running uncontrolled, especially in remote parts of the park. As it was, androids could stay in improvisation mode for long stretches, or be controlled by a system the park staff had no access to.
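Such a timeout is an ordinary heartbeat watchdog. A minimal sketch, with invented intervals:

```python
import time

# Heartbeat watchdog: if no signal arrives from the central server
# within TIMEOUT seconds, the host powers itself down instead of
# continuing to run autonomously. The interval is invented.

TIMEOUT = 30.0          # seconds of silence tolerated
last_heartbeat = time.monotonic()

def on_heartbeat() -> None:
    """Called whenever a packet from the central server arrives."""
    global last_heartbeat
    last_heartbeat = time.monotonic()

def watchdog_tick() -> bool:
    """Return False (and trigger shutdown) once the link goes silent."""
    if time.monotonic() - last_heartbeat > TIMEOUT:
        shutdown()
        return False
    return True

def shutdown() -> None:
    print("Link lost: entering safe sleep mode.")

on_heartbeat()
print(watchdog_tick())  # True while the link is alive
```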

The third is that several team members have access to the park's entire functionality at once, letting them redefine the behavior of every living entity in the world of Westworld, for instance, by mass-changing all passwords, which made it hard for the technical services to regain control of the devices. And since there are several such people, identifying the culprit can take a long time.

The fourth is the lack of access-rights checks for performing certain actions. The security staff would have done well to put additional restrictions on critical actions such as waking a host from sleep mode or striking a human. Then, even if an attacker (whether the host itself or another android) seized control, it would be unable to harm anyone.
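In practice this is plain role-based access control: every critical command verifies the caller's permissions before executing. A toy sketch with invented roles and actions:

```python
# Role-based check on critical host commands. Roles, permissions
# and actions are all invented for the example.

PERMISSIONS = {
    "technician": {"run_diagnostics", "wake_from_sleep"},
    "guest":      set(),                # guests can command nothing
    "host":       {"run_diagnostics"},  # hosts cannot wake each other
}

def execute(actor_role: str, action: str) -> str:
    if action not in PERMISSIONS.get(actor_role, set()):
        return f"DENIED: {actor_role} may not {action}"
    return f"OK: {action} executed"

print(execute("technician", "wake_from_sleep"))  # OK
print(execute("host", "wake_from_sleep"))        # DENIED
```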

Finally, the organization of the staff's own work looks far from optimal: departments communicate with each other poorly, the security system has obvious gaps, and remote diagnostics and updates are not performed at all (even though in some scenes we see staff lock down all the androids in a given location at the touch of a button). The latter might be explained by a reluctance to expose the internal kitchen to visitors yet again, but physically removing androids unnoticed is impossible anyway: guests see the staff regardless and can figure out that something has gone wrong.

"No, son, that's science fiction" - or could such a park really be built today?




The main difference between the show's robots and those that exist in the real world is the ability to emulate long conversations with complete realism (in Westworld it is mentioned that the park's creators achieved this within a year). Real-world AI systems are limited in their understanding of context and, as a rule, cannot remember more than a couple of previous remarks. This is mainly because current algorithms require storing and processing a very large amount of data to preserve context. Think of chess: anyone can think one move ahead, thinking two or three moves ahead is already much harder, and almost no one can calculate a game seven or eight moves deep.

Modern systems can remember one to three previous remarks and respond to them quite realistically. They can already detect the emotion coloring a remark (both voice and text) and respond accordingly for a few phrases, but no longer. That is, Alice can understand that a phrase is a joke or an obscure quotation and pick a suitable answer with that knowledge. But she cannot go back seven or eight exchanges (or days) and notice that you used to think differently. Or, say, remember that you were rude to her two or three days ago and act offended the next time you meet.
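That short memory is often implemented literally as a sliding window over the dialogue: only the last N turns are fed back into the model, and everything older falls off the edge. A minimal sketch (the window size is an invented example):

```python
from collections import deque

# Sliding dialogue window: only the last N turns survive as context.
# Anything said earlier is simply gone, which is why the assistant
# cannot be "offended" about last week.

N_TURNS = 3
context: deque[str] = deque(maxlen=N_TURNS)

def observe(turn: str) -> list[str]:
    """Add a turn and return the context the model would actually see."""
    context.append(turn)
    return list(context)

observe("You are useless!")    # the insult...
observe("What's the weather today?")
observe("Play some music.")
print(observe("Do you remember what I said?"))
# -> the insult has already dropped out of the window
```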

At the current level of technology, a computer needs several days to compose a long, coherent conversation, which is obviously no good for keeping up a live dialogue. The one exception where a modern android can sustain a long, coherent conversation is when it leads the user through a standard scenario with very few branches. This is how chatbots in online stores work, for example.
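Such scripted bots are usually simple finite-state machines: each state has a fixed prompt and a handful of transitions. A toy sketch of a shop bot (states and replies invented):

```python
# A scripted shop bot as a finite-state machine: long and coherent,
# but only because the user is railroaded through a fixed script.

STATES = {
    "start":   ("What would you like to order?",        {"hat": "size", "boots": "size"}),
    "size":    ("Great choice! What size do you need?", {"any": "confirm"}),
    "confirm": ("Order placed. Anything else?",         {"no": "done", "any": "start"}),
    "done":    ("Thanks for shopping with us!",         {}),
}

def step(state: str, user_input: str) -> str:
    prompt, transitions = STATES[state]
    key = user_input if user_input in transitions else "any"
    return transitions.get(key, state)   # stay put on unknown input

state = "start"
for said in ["hat", "42", "no"]:
    state = step(state, said)
    print(STATES[state][0])
```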

Androids keep the experience of communicating with a user until their physical "death": if a visitor returns to the park a year later, the robot will remember and recognize the guest, provided no other host or visitor has killed it in the meantime.

As for the park itself... Even if the interfaces would not look quite so futuristic, building something similar with the tools available today is more than realistic. And not in Elon Musk's underground bunkers, but in small IoT companies.

Westworld runs on an advanced Internet of Things solution: devices joined into a single system operate autonomously according to set scenarios, while a human is needed only for monitoring and resolving emergencies. A simple example is self-driving cars, which steer themselves, change the plan on the go, and interact with other cars and objects on the road, while a human can always command one to stop or change the route. The main difficulty here, as in Westworld, is reacting to unforeseen situations: an animal suddenly running onto the road, say, or a road sign that does not reflect reality. Although in Westworld, reality itself is largely shaped by the same puppeteers who shape behavior. And, as we have seen, creators cannot always control their creations... Or can they? Looking forward to your reply in the third season, Mr. Ford.

Source: https://habr.com/ru/post/419169/

