
The revolution in AI will produce not droids, but toasters



Will the intelligent algorithms of the future look like general-purpose robots, as comfortable with casual conversation and reading maps as with kitchen tasks? Or will our digital assistants be more like specialized gadgets - not one talkative chef, but a kitchen full of household appliances?

When an algorithm tries to do too much, it runs into trouble. The following recipe was created by an artificial neural network, a type of artificial intelligence that learns from examples. The algorithm carefully studied about 30,000 recipes, from soups and pies to barbecue, and then tried to produce a recipe of its own. The result was, let's say, unorthodox.

Chicken Pasta with Rice
')
2 pounds peeled hearts
1 cup chopped fresh mint or raspberry pie
1/2 cup grated katrimas
1 tablespoon vegetable oil
1 salt
1 pepper
2 1/2 tablespoons sugar

Mix without leaves and stir until the mixture becomes thick. Add eggs, sugar, honey, cumin seeds, and cook over low heat. Add corn syrup, oregano, rosemary and white pepper. Add the cream over the heat. Prepare to add the remaining teaspoon of baking powder and salt. Cook at 350 ° F for 2 to 1 hour. Serve hot.

For 6 servings.
And here is an example of a recipe created by the same algorithm, except that instead of studying every kind of recipe at once, it was trained only on cakes. The recipe is still not ideal, but it is much better than the previous one.
Carrot cake

1 pack of yellow cake mix
3 cups flour
1 teaspoon baking powder
1 1/2 teaspoons soda
1/4 teaspoon salt
1 teaspoon cinnamon
1 teaspoon ginger
1/2 teaspoon clove
1 teaspoon baking powder
1/4 teaspoon salt
1 teaspoon vanilla
1 egg room temperature
1 cup sugar
1 teaspoon vanilla
1 cup chopped pecans

Preheat oven to 350 degrees. Grease a 9-inch baking sheet.

Quickly beat the eggs until dark yellow. Set aside. In a separate bowl, whip the whites until hard. Speed up the first mixture into the prepared form and soften the oil. Bake in the oven for 40 minutes, until the toothpick inserted in the center of the pie is clean. Cool in the form of 10 minutes. Put on a wire rack to cool.

Remove the cake from the mold until it is completely cooled. Serve warm.

For 16 servings.
Of course, if you look closely at the instructions, it becomes clear that all you would actually get out of the oven is a baked egg yolk. But it is still an improvement. When the AI was allowed to limit itself to a single specialization, the amount it had to keep track of simply shrank. It no longer had to decide when to use chocolate and when to use potatoes, when to bake and when to boil. If the first algorithm tried to be a magic box capable of producing rice, ice cream, and pies, the second tried to be something like a toaster: a specialized device for a single task.
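For readers curious how such a recipe generator might be put together, here is a minimal sketch of a character-level neural network of the general kind described above. The tiny toy corpus, the network size, and the training length are all illustrative assumptions - nothing like the 30,000-recipe dataset behind the examples.

```python
# A minimal character-level text generator (sketch, not the author's actual model).
import torch
import torch.nn as nn

# Toy stand-in for a real recipe corpus.
corpus = (
    "1 cup flour\n1 teaspoon baking powder\n1/2 cup sugar\n"
    "2 eggs\n1 teaspoon vanilla\nbake at 350 F for 30 minutes\n"
)
chars = sorted(set(corpus))
stoi = {c: i for i, c in enumerate(chars)}
itos = {i: c for c, i in stoi.items()}
data = torch.tensor([stoi[c] for c in corpus], dtype=torch.long)

class CharRNN(nn.Module):
    def __init__(self, vocab, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x, h=None):
        z, h = self.rnn(self.embed(x), h)
        return self.head(z), h

model = CharRNN(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=3e-3)
loss_fn = nn.CrossEntropyLoss()

# Train to predict the next character at every position.
seq_len = 32
for step in range(300):
    i = torch.randint(0, len(data) - seq_len - 1, (1,)).item()
    x = data[i : i + seq_len].unsqueeze(0)          # (1, seq_len)
    y = data[i + 1 : i + seq_len + 1].unsqueeze(0)  # same sequence shifted by one
    logits, _ = model(x)
    loss = loss_fn(logits.view(-1, len(chars)), y.view(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Sample new "recipe" text one character at a time.
x = data[:1].unsqueeze(0)
h = None
out = []
for _ in range(120):
    logits, h = model(x, h)
    probs = torch.softmax(logits[0, -1], dim=0)
    nxt = torch.multinomial(probs, 1)
    out.append(itos[nxt.item()])
    x = nxt.view(1, 1)
print("".join(out))
```

Trained on a corpus this small, the output will be gibberish, but the mechanics are the same: the narrower and more uniform the training data, the more plausible the generated text looks.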

Developers who train machine learning algorithms have found that it often makes sense to build toasters rather than magic boxes. That may seem counterintuitive, because AI in Western fiction looks much more like C-3PO from Star Wars or WALL-E from the film of the same name. These are examples of artificial general intelligence (AGI): automata that can interact with the world the way people do and perform many different tasks. Many companies, however, are quietly and successfully using machine learning for far more limited goals. One algorithm might be a chatbot that handles a limited range of basic customer questions about a phone bill. Another might predict what a caller wants to discuss and display that prediction on screen for the person answering the call. These are examples of artificial narrow intelligence (ANI), limited to a very small set of functions. By contrast, Facebook recently retired its chatbot "M", which never quite managed to cope with booking hotels, buying theater tickets, and so on.
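To make the "toaster" idea concrete, here is a toy sketch of what the core of such a narrow billing chatbot might look like: an intent classifier over a handful of question types. The example phrases, intent labels, and canned replies are invented for illustration and do not describe any real company's system.

```python
# A narrow chatbot core: classify a customer message into one of a few intents.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training phrases and their intent labels.
training_phrases = [
    "how much is my bill this month",
    "why is my bill higher than last month",
    "when is my payment due",
    "what date do I need to pay by",
    "I want to cancel my plan",
    "please close my account",
]
intents = ["bill_amount", "bill_amount", "due_date", "due_date", "cancel", "cancel"]

# Canned replies for each intent the bot knows how to handle.
replies = {
    "bill_amount": "Your current balance is shown on the Billing page.",
    "due_date": "Payments are due on the 15th of each month.",
    "cancel": "I'll connect you with an agent who can close your account.",
}

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(training_phrases, intents)

question = "why did my bill go up"
intent = model.predict([question])[0]
print(intent, "->", replies[intent])
```

Everything outside this tiny set of intents is simply out of scope, which is precisely what keeps such a system workable.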

The reason we have ANI instead of WALL-E-level AGI is that any algorithm that tries to generalize across tasks starts doing worse at each of them. For example, there is an algorithm trained to generate pictures from text descriptions. It tries to draw a picture from the text: "this is a yellow bird with black spots on the head and a very short beak." When it was trained on a data set consisting entirely of birds, it did quite well (the strange horn aside):



But when it was asked to generate anything from stop signs and boats to cows and people, it had a hard time. Here is the result of an attempt to draw "an image of a girl eating a slice of pizza":



We are not used to thinking that there is such a huge gap between an algorithm that does one thing well and an algorithm that does many things well. But the mental capacity of our current algorithms is very limited compared to the human brain, and every added task stretches it further. Imagine a household appliance the size of a toaster: it is easy to cut a couple of slots into it, install heating coils, and toast bread. But after that there is little room left for anything else. If you try to cram in a rice cooker and an ice cream maker as well, you will have to give up the slots at the very least, and the resulting device probably will not do any of its jobs very well.

Programmers use various tricks to squeeze the most out of ANI algorithms. One is transfer learning: train an algorithm on one task, and it can learn to perform another, closely related task after minimal retraining. Transfer learning is commonly used for image recognition. An algorithm that has learned to recognize animals has already accumulated a lot of knowledge about detecting edges and analyzing textures, and that knowledge can be transferred to the task of recognizing fruit. But when the algorithm is retrained on fruit, it undergoes "catastrophic forgetting": it no longer remembers how to identify animals.
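Here is a minimal sketch of that transfer-learning recipe: take a network pretrained on one domain (ImageNet photos) and train only a small new head for another. The "fruit" classes and dummy tensors are stand-ins for a real dataset, and the snippet assumes torchvision 0.13 or later for the `weights=` argument.

```python
# Transfer learning sketch: reuse a pretrained backbone, train only a new head.
import torch
import torch.nn as nn
from torchvision import models

num_fruit_classes = 5  # hypothetical fruit dataset

# Pretrained backbone already "knows" edges, textures, shapes.
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Freeze the pretrained weights so only the new head is trained.
for p in backbone.parameters():
    p.requires_grad = False

# Replace the final ImageNet classifier with a fresh fruit classifier.
backbone.fc = nn.Linear(backbone.fc.in_features, num_fruit_classes)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on random stand-in data.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_fruit_classes, (8,))
optimizer.zero_grad()
logits = backbone(images)
loss = loss_fn(logits, labels)
loss.backward()
optimizer.step()
print("fruit-head loss:", loss.item())
```

If, instead of freezing the backbone, all the layers were retrained hard on fruit alone, the weights that encode animal recognition would drift away - which is the catastrophic forgetting described above.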

Another trick used in today's algorithms is modularity. Rather than being a single algorithm that can solve any problem, the AI of the future will most likely be an assembly of highly specialized tools. An algorithm that learns to play Doom might have separate subsystems for computer vision, control, and memory. Interconnected modules can provide redundancy against failures, along with a voting mechanism that picks the best solution from several different approaches. Modularity may also offer a way to detect and correct an algorithm's mistakes: it is usually quite hard to understand how a single algorithm reaches its decisions, but if a decision emerges from the interaction of several algorithms, we can examine the output of each one.
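Here is a minimal sketch of that modular, vote-based idea in plain Python. The module names, the toy "game state", and the majority-vote rule are assumptions chosen for illustration, not a description of a real Doom-playing agent.

```python
# Modular agent sketch: specialized components plus a simple vote over actions.
from collections import Counter

class VisionModule:
    """Pretends to detect threats in the current frame."""
    def propose(self, state):
        return "shoot" if state["enemy_visible"] else "explore"

class MemoryModule:
    """Pretends to remember which rooms have already been visited."""
    def propose(self, state):
        return "explore" if state["room"] not in state["visited"] else "backtrack"

class ControlModule:
    """Pretends to manage health and ammunition."""
    def propose(self, state):
        return "retreat" if state["health"] < 30 else "explore"

def vote(modules, state):
    # Each specialized module votes; the most common proposal wins.
    votes = [m.propose(state) for m in modules]
    decision, _ = Counter(votes).most_common(1)[0]
    return decision, votes

agent = [VisionModule(), MemoryModule(), ControlModule()]
state = {"enemy_visible": False, "room": "B", "visited": {"A"}, "health": 80}
decision, votes = vote(agent, state)
print("votes:", votes, "-> decision:", decision)
```

Because each module's vote is visible, a bad decision can be traced back to the component that proposed it - the debuggability advantage mentioned above.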

So we probably should not picture the algorithms of the distant future as WALL-E or C-3PO. Instead, picture something like a smartphone full of apps, or a kitchen counter covered in gadgets. As we prepare for a world full of algorithms, we should plan to meet not thinking general-purpose magic boxes, which may never appear, but highly specialized toasters.

Source: https://habr.com/ru/post/419869/

