
How to teach a robot to cook?

Researchers at the University of Maryland and the Australian ICT Research Hub have announced a method that will allow robots to teach themselves to cook. To do so, the robots rely on a method that is quite common today: they watch videos on YouTube.


The researchers used recently developed deep learning methods and neural networks, programs that simulate the human brain with artificial neurons. The scientists "fed" the system training videos published on YouTube, and the system, in turn, identified the objects used by the chefs and tried to work out which actions were needed for cooking.

In this way the scientists have built a system that should eventually allow a robot to perform basic kitchen tasks, and to recognize those tasks simply by watching videos.
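As a rough illustration of the idea (not the researchers' actual system), the sketch below assumes we already have per-frame predictions of the object being handled, the grasp being used and the motion being made, and simply aggregates them into a single action description by majority vote. All labels and names here are hypothetical.

```python
from collections import Counter

# Hypothetical per-frame outputs of an object recognizer and a grasp classifier
# run over a short YouTube cooking clip (labels are made up for illustration).
frame_predictions = [
    {"object": "jar",    "grasp": "power-grip", "motion": "twist"},
    {"object": "jar",    "grasp": "power-grip", "motion": "twist"},
    {"object": "bottle", "grasp": "power-grip", "motion": "twist"},  # a misdetection
    {"object": "jar",    "grasp": "power-grip", "motion": "twist"},
    {"object": "jar",    "grasp": "pinch-grip", "motion": "lift"},
]

def summarize_clip(frames):
    """Collapse noisy frame-level labels into one (object, grasp, motion) triple."""
    vote = lambda key: Counter(f[key] for f in frames).most_common(1)[0][0]
    return {"object": vote("object"), "grasp": vote("grasp"), "motion": vote("motion")}

action = summarize_clip(frame_predictions)
print(f"Inferred action: {action['motion']} the {action['object']} "
      f"using a {action['grasp']}")
# -> Inferred action: twist the jar using a power-grip
```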
And yet, it would seem, some robots already know how to cook. Cooki, for example, from the startup Serenity, performs actions in a fixed sequence: it boils water, adds pasta, fries meat; in general, it does everything needed to replace you in the kitchen.


Motherboard interviewed Cornelia Fermüller, a member of the team working on teaching the robot to cook from videos.

How do you make a robot learn to cook?

There are many studies on robot training; it is a well-studied area. But that work is aimed at copying movements: the robot sees an action and repeats it. In real life everything is much more complicated, because you need to adapt to different situations. A jar or a spatula can come in different sizes, can sit in different places on the kitchen counter, and the counter itself can be cluttered with something; all of this complicates the job.

The robot must be able to perceive the world. For example, the "cook" needs to open a jar of peanut butter that is always kept in one place, but someone has moved it. You have to prepare the machine for situations like that.

Are deep learning techniques and YouTube videos enough to overcome these problems?

The robot watches ordinary videos, not scenes shot in a lab. There are many ways to film: videos differ in camera angle, scene, lighting and many other characteristics. The view may hide a hand, or one product may block other objects. This is where we use deep learning techniques. The software allows us to recognize objects, including when they are in a person's hand.
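As a loose illustration of that recognition step (again, not Fermüller's actual pipeline), the sketch below runs an off-the-shelf pretrained object detector from torchvision on a single frame of a video; the file name and confidence threshold are placeholders, and a recent torchvision release (0.13 or later) is assumed.

```python
import cv2                      # pip install opencv-python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

# Pretrained Faster R-CNN from torchvision (trained on COCO); "DEFAULT" weights
# require torchvision >= 0.13.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

cap = cv2.VideoCapture("cooking_clip.mp4")   # hypothetical clip name
ok, frame_bgr = cap.read()                   # grab one frame
cap.release()

if ok:
    frame_rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        prediction = model([to_tensor(frame_rgb)])[0]
    for label, score in zip(prediction["labels"], prediction["scores"]):
        if score > 0.7:                      # arbitrary confidence cutoff
            print(f"COCO class id {int(label)} detected with score {float(score):.2f}")
```

Detecting an object held in a hand is harder than this single-frame example suggests, since the hand occludes much of the object; that is exactly the kind of case the deep learning models mentioned above are meant to handle.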

Our robots are not cooking yet. But we can already do a few things: we can pour water and stir something. Robots that work autonomously, prepare food and learn to do so on their own do not exist yet.



What technical obstacles do you encounter?

First of all, you need to solve all the problems described above. We must teach the robot to perform actions in different conditions, places and situations. On top of that, anything can happen while cooking. Everything must be taken into account.

A robot must have intelligence: the ability to reason, to recognize unforeseen events and to respond to them. That is, it must be autonomous. We are continuing our research and working on artificial intelligence, but we have not created it yet. Progress is being made, but it takes more time.

Coming back to learning: how did you figure out how to use videos to train a robot?

People perform manipulations in different ways. It is not enough to watch a video and get the robot to perform the same motion. You and I can open a jar of peanut butter in completely different ways.

We solved this problem in the following way: we think not about the movement but about the result. The robot breaks the action down into small pieces.

To open a jar of peanut butter you need to perform the following actions: move your hand toward the jar, grasp the lid, turn the lid, and take your hand away with the lid. Understanding these actions, this "grammar", is like understanding a language: just as a sentence can be broken down into words, an action can be broken down into smaller steps.
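A minimal sketch of that decomposition idea, with hypothetical primitive names: the "open the jar" task is written as an ordered list of sub-actions, each paired with the result it should produce, so the plan depends on outcomes rather than on any particular hand trajectory.

```python
from dataclasses import dataclass

@dataclass
class SubAction:
    """One 'word' in the action grammar: a primitive plus its intended result."""
    primitive: str   # hypothetical robot primitive
    target: str      # object the primitive acts on
    goal: str        # result to verify before moving on

# "Open a jar of peanut butter" broken down into result-oriented steps.
OPEN_JAR = [
    SubAction("reach",   "jar", "hand is next to the jar"),
    SubAction("grasp",   "lid", "lid is held"),
    SubAction("twist",   "lid", "lid is loose"),
    SubAction("retract", "lid", "lid is removed"),
]

def describe(plan):
    for i, step in enumerate(plan, start=1):
        print(f"{i}. {step.primitive} {step.target}  (check: {step.goal})")

describe(OPEN_JAR)
```

Because each step is defined by its result rather than by an exact motion, the same plan covers the "completely different ways" of opening the jar mentioned above.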

When will robots start watching videos and cooking like we do?

It is hard to say. It will happen, especially in a calm setting where children are not running around, but it will take more time. We still need to handle all kinds of perception. For example, when the robot goes to the refrigerator, it must understand that the salad may not be in the house and that it will have to go to the store.

Source: https://habr.com/ru/post/375451/

