Today we continue publishing summaries of talks given by Russian Code Cup 2013 speakers. This time we present the talk by Ken Goldberg.
Ken Goldberg is the inventor of the world's first web-based robot and a professor in the School of Computer Science at the University of California, Berkeley.
I would like to devote my talk to modern robotics and its near future. You have probably heard of Google's driverless car. People often ask me why Google is involved in robotics at all. I believe it comes down to the search giant's deep interest in cloud technologies; I personally cannot imagine the future of robotics without the cloud. You may be wondering: what kind of robots would be unable to work without the cloud? Let me tell you about cloud robotics and my latest research in this area.

I'll start with a short historical digression. The Internet as we know it emerged about 20 years ago; I first learned about the World Wide Web in 1993. My students and I wanted to build something interesting with this new technology, and our project became a network-connected garden robot, the Telegarden. We built a flower bed 2 m in diameter and 0.5 m high, planted it, installed a robotic arm in the center, and wrote a simple browser-based interface. Of course, in 1993 there were few places from which one could access the Telegarden.


The robot let users observe the plants through a video camera and water them: anyone could water our flower bed over the network. Those who performed N waterings were given seeds that could be planted with the robot's help. Over time, though, the number of remote users grew so large that they only interfered with one another, and the flower bed fell into disrepair. So, among other things, our experiment vividly illustrated the proverb that too many cooks spoil the broth.
Since then, both the Internet and robotics have come a long way. People are getting used to robot vacuum cleaners, military robots are becoming commonplace, and doctors already operate about 2,000 medical robots, mainly in surgery. In short, robots have entered our lives.

The orange boxes in this photo are robots that help fulfill orders in the huge warehouses of companies like Amazon: they bring pallets of goods to human operators, who then pick the orders.
And this robot is familiar to a far larger audience: the Kinect module for the Xbox game console:

The term "cloud robotics" itself was coined by James Kuffner of Google. He was among the first to argue that cloud computing would greatly simplify the tasks robots have to perform. For a robot to wash the dishes, make the bed, or tidy an apartment, it must solve many hard analytical problems, because recognizing the environment and the enormous number of objects in it is difficult. Computing hardware capable of handling such tasks in reasonable time may be too expensive to build into simple robots, to say nothing of the huge database of object descriptions the robot would have to carry on board. It is far cheaper and simpler to use remote resources for recognition and computation.
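The offloading idea can be sketched in a few lines of Python. This is a toy illustration, not a real robot API: the "cloud" here is simulated by a thread pool, and the recognition function with its tiny object database is entirely made up.

```python
from concurrent.futures import ThreadPoolExecutor

def recognize(view):
    """Stand-in for an expensive cloud-side recognition call.

    A real robot would send the camera view to a remote service;
    here we just look it up in a tiny hypothetical 'database'
    mapping object names to a grasp hint.
    """
    database = {"mug": (0.3, 0.1), "plate": (0.9, 0.2)}
    return view, database.get(view)

# The robot sends several views out in parallel instead of
# processing them on its own limited onboard computer.
views = ["mug", "plate", "unknown_object"]
with ThreadPoolExecutor(max_workers=3) as pool:
    results = dict(pool.map(recognize, views))

# Views the cloud could not identify fall back to asking a human,
# which is exactly the human-in-the-loop option discussed below.
unresolved = [v for v, hint in results.items() if hint is None]
```

The point of the sketch is the division of labor: the robot only collects views and acts on hints; everything heavy happens remotely.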
Incidentally, there is already an operating system for robots, ROS, somewhat analogous to Linux: it is open source and developing quite rapidly.
Last year I took part in an African project to build a robotics network on that continent. It was decided to create a series of very cheap robots for use in schools and universities, and a competition was announced to build a robot costing $10. We understood the goal was probably unattainable, but it served as a guideline for the participants. The winner was an enthusiast from Thailand: he took a game-console controller, added several sensors, and used two Chupa Chups lollipops as a counterweight. The result was a robot costing $8.90, candy included. He called it Lollybot.

Any robot, even the most sophisticated, will face tasks it cannot solve on its own. In such cases it will be able to go online or call on a human for help. In general, cloud robotics offers the following advantages:
- Access to huge amounts of data
- Access to the most powerful computing systems
- Operation within an open-source ecosystem
- Self-improvement through data exchange with other robots
- The ability to ask a remote human specialist for help
Remember the dialogue in the movie "The Matrix":
— Can you fly that thing?
— Not yet. Operator, load the pilot program for a B-212 helicopter.
A lot of research is under way in this direction.
My students and I are currently working on two topics. Imagine that a robot needs to clear a table. "Looking" at the table through its camera, the robot may have trouble recognizing objects, determining their position in space, and positioning its manipulators. To address these problems there is a family of techniques built around the notion of belief space: a space of hypotheses about the world. We know what things we are likely to encounter in our surroundings, and from a number of recognized images, after processing a large amount of data, we can predict the presence of other objects. It makes little sense to run all of this on the robot itself, so we are trying to solve the problem of grasping a complex-shaped object using the cloud.
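At its core, maintaining a belief space means updating a probability distribution over hypotheses as observations arrive. Here is a minimal sketch of one Bayesian update; the objects and likelihood numbers are purely illustrative, not from our experiments.

```python
# Prior belief over what object is on the table (illustrative).
prior = {"cup": 0.5, "bowl": 0.3, "spoon": 0.2}

# Likelihood of observing a "round rim" feature given each object
# (also illustrative numbers).
likelihood = {"cup": 0.8, "bowl": 0.9, "spoon": 0.05}

def bayes_update(prior, likelihood):
    """One belief-space update: posterior is proportional to
    likelihood times prior, renormalized to sum to 1."""
    unnormalized = {k: prior[k] * likelihood[k] for k in prior}
    z = sum(unnormalized.values())
    return {k: v / z for k, v in unnormalized.items()}

posterior = bayes_update(prior, likelihood)
# Seeing a round rim makes "cup" and "bowl" more plausible
# and all but rules out "spoon".
```

In practice the hypothesis space is huge, which is exactly why this bookkeeping is a natural candidate for the cloud rather than the robot's onboard computer.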

The object model must include certain geometric tolerances, because the robot will constantly encounter objects that differ slightly from the reference models in its database. There is a substantial body of work on grasping objects of uncertain or imprecisely known shape:

- Reuse of PRMs based on known obstacles. Lien and Lu, 2005
- Adaptation of motion primitives. Hauser et al., 2006
- Policy transfer for footstep plans. Stolle et al., 2007
- Learning where to sample. Zucker et al., 2008
- Skill trees from demonstration. Konidaris et al., 2010
- Path retrieval from a static library. Jetchev and Toussaint, 2010
Suppose we have a two-jaw gripper and an object whose shape the robot's sensors perceive as follows:

Obviously, the actual geometry of the object may differ from what the robot currently sees, and it must work out a safe way to grasp it. One can run a probabilistic analysis of the possible shapes using a Gaussian distribution, estimate the approximate location of the center of mass, and build a grasping algorithm from those calculations. Let us assume the gripper jaws are parallel to each other, and that a sensor has reported one jaw touching the object. We then need to test the candidate grasps geometrically and estimate the probability of a safe, reliable grasp. All of these computations can be carried out in the cloud, parallelizing the geometric analysis across the shape hypotheses.
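The flavor of this analysis can be conveyed with a toy Monte Carlo sketch, under heavily simplified assumptions of my own: the object is a sensed 2D contour, sensor noise is Gaussian on each point, and a parallel-jaw grasp "succeeds" if its closing line passes close to the (uncertain) centroid. None of this is our actual algorithm; it only shows how a success probability falls out of sampling.

```python
import random

def center_of_mass(points):
    """Centroid of sampled contour points (crude stand-in for the
    true center of mass of the object)."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def grasp_succeeds(points, grasp_y, tolerance=0.05):
    """Toy success test for a parallel-jaw grasp closing along the
    horizontal line y = grasp_y: succeed if the line passes close
    to the (uncertain) center of mass."""
    _, cy = center_of_mass(points)
    return abs(cy - grasp_y) < tolerance

def success_probability(nominal, grasp_y, sigma=0.02, trials=5000, seed=0):
    """Monte Carlo estimate: perturb each contour point with Gaussian
    sensor noise and count how often the grasp still succeeds.
    Trials are independent, so in the cloud they could run in parallel."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        noisy = [(x + rng.gauss(0, sigma), y + rng.gauss(0, sigma))
                 for x, y in nominal]
        if grasp_succeeds(noisy, grasp_y):
            hits += 1
    return hits / trials

square = [(0, 0), (1, 0), (1, 1), (0, 1)]  # nominal sensed contour
p_center = success_probability(square, grasp_y=0.5)  # grasp through centroid
p_edge = success_probability(square, grasp_y=0.7)    # grasp off-center
```

Even this toy version shows why some intuitively plausible grasps score poorly: the off-center grasp almost never survives the sampled shape perturbations.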
Look at this collection of different objects:

Looking at these silhouettes, you can more or less guess how best to grasp each of them. But intuition often lets us down: our own choice of grasp point is not always optimal. We tested our method in real conditions on a simpler object:

Although intuition and experience suggest that all three grasp candidates (in the upper right corner of the image) should work equally well, the mathematics says otherwise: the first option is the best. In fact, its probability of success is almost four times higher than the third option's, even though the two hardly differ visually. Experiments confirmed our calculations.
The second task we are working on is object recognition.

To identify an object it does not recognize, the robot can photograph it and run the picture through Google Goggles, Google's free visual search service. Ideally, of course, this would be a specialized database that also provides the object's strength, mass, and size, and better still suggests the best grasp, or several good ones. But that requires a great deal of tagging and annotation work. To help train other robots, the database should also receive feedback on how well the suggested algorithms worked. So far our system is far from perfect: it recognizes objects with about 80% probability, and only 87% of the recognized objects can be safely grasped by the manipulator.
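Those two figures compound: the pipeline only succeeds if the object is first recognized and then grasped. A one-line calculation makes the end-to-end rate explicit.

```python
def end_to_end_success(p_recognize, p_grasp_given_recognized):
    """Chance that the whole pipeline works: the object must first
    be recognized, and then successfully grasped given recognition."""
    return p_recognize * p_grasp_given_recognized

# Figures quoted above: ~80% recognition, and 87% of the
# recognized objects grasped safely.
p = end_to_end_success(0.80, 0.87)  # roughly 0.70 overall
```

So despite two reasonably high stage-wise numbers, only about 70% of objects make it all the way from photo to safe grasp, which is why feedback into the database matters.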
At the beginning of my talk I mentioned surgical robots, and I want to say more about how they are used. There is a treatment method called brachytherapy: contact radiation therapy. According to statistics, one man in six develops prostate cancer. One treatment is to insert needles into the tumor and deliver targeted irradiation through them.

To reach the prostate, these needles must pass through a number of other important and sensitive organs and tissues. As you can imagine, this is a risky procedure in itself:

It was proposed to abandon the fixed parallel grid and insert the needles at varying angles to reduce the risk of damaging surrounding organs.

A special algorithm was developed to compute the needle trajectories. This kind of computation requires a lot of power, but with access to the cloud that is not a problem. Practical results already confirm that abandoning parallel needle insertion is no less effective and at the same time safer for patients. Robots are now being developed to help surgeons insert the needles more accurately.
The second area where surgical robots are being introduced is the placement of implants for certain types of cancer. These implants can act as a lens, focusing radiation on a specific region of the tumor. To do this, the implant's shape must be determined correctly, taking into account the geometry of the patient's channels, cavities, and organs.

In August we presented our method for computing nonlinear delivery of implants to the irradiation site. Thanks to 3D printing, which gets cheaper every year, it is possible to manufacture not only the implants themselves but also channels of any required configuration for inserting the irradiating instruments.

The first results of a clinical study of our method already give serious grounds for optimism. We call another medical application of cloud technologies "superhuman" surgery: laparoscopy in which the robot acts as an assistant while the surgeon supervises its actions.

There are many functions that could have been automated long ago. Suturing an incision, for example, requires not great intelligence but extreme concentration. The surgeon could specify where the suture should start and end and then hand the task to the robot. Simply programming the procedure outright fails because of its complexity, so the best option is to teach the robot from human demonstrations. Analysis of surgeons' movements showed that human motion is never perfectly smooth; there is always some tremor:


In other words, a lot of noise. To smooth the surgeon's motions we used dynamic time warping, a technique borrowed from speech recognition.
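Dynamic time warping aligns two sequences that perform the same motion at different speeds. Here is a minimal textbook implementation on 1D sequences; real surgical trajectories would of course be multidimensional, and this sketch is only meant to show the idea.

```python
def dtw_distance(a, b):
    """Dynamic time warping: finds the cheapest alignment between two
    sequences that may differ in timing (e.g. two demonstrations of
    the same suturing motion) and returns its total cost."""
    n, m = len(a), len(b)
    inf = float("inf")
    # cost[i][j] = best alignment cost of a[:i] against b[:j]
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch b
                                 cost[i][j - 1],      # stretch a
                                 cost[i - 1][j - 1])  # advance both
    return cost[n][m]

# The same motion executed at half speed still aligns at zero cost:
slow = [0, 0, 1, 1, 2, 2, 3, 3]
fast = [0, 1, 2, 3]
```

Averaging several demonstrations after DTW alignment is one common way to cancel out the tremor: the timing differences no longer smear the motions against each other.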

The resulting smooth trajectory can then be handed to the robot for execution:

Using iterative learning, we can tune the robot's motion algorithm to perform the operation as accurately and efficiently as possible, and then dramatically increase its speed.
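The spirit of iterative learning is that the robot repeats the same trajectory, measures its tracking error, and corrects its feedforward command for the next repetition. Below is a toy sketch with an invented scalar "plant" and learning rate; it is not our controller, just the standard update rule in miniature.

```python
def ilc_trial(u, reference, plant_gain=0.8, learning_rate=0.5):
    """One pass of iterative learning control: run the (toy) plant,
    measure the tracking error, and correct the feedforward command
    for the next repetition of the same trajectory."""
    y = [plant_gain * ui for ui in u]                  # toy plant response
    error = [r - yi for r, yi in zip(reference, y)]
    u_next = [ui + learning_rate * ei for ui, ei in zip(u, error)]
    return u_next, max(abs(e) for e in error)

reference = [0.0, 1.0, 2.0, 1.0, 0.0]  # desired trajectory (illustrative)
u = [0.0] * len(reference)             # start with no feedforward at all
errors = []
for _ in range(20):
    u, e = ilc_trial(u, reference)
    errors.append(e)
# After a few dozen repetitions the tracking error is negligible.
```

Because the same trajectory is repeated each time, the error shrinks geometrically from trial to trial; only once tracking is accurate does it make sense to speed the motion up.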
Another surgical task where robots can be used is extracting foreign objects from the human body: bullets, shrapnel, and so on. Work in this direction is also under way. For now a human performs this operation three times faster than an autonomous system, but we believe the gap can be closed through cloud computing and further optimization of the algorithms.
In conclusion, a few words about what I think awaits us in the near future. We will be surrounded by devices with "brains" of their own, or at least with RFID tags. The number of devices connected to the Internet to obtain the information they need will keep growing: the so-called industrial Internet, the Internet of Things.

Of course, this will require resolving questions of trust in the manufacturers of all these "smart" devices, increasing the bandwidth of the network infrastructure, and protecting against hackers. There will also be the problem of repairing and maintaining such a quantity of electronics. Incidentally, this little fellow is one of the first harbingers of household robots without computing power of their own:

It is just a platform; all the control is handled by a program installed on your iPhone.
We should expect great progress in anthropomorphic and zoomorphic robots; the successes of the developers at Boston Dynamics are so impressive that they are even a little frightening. Flying robots and quadcopters are also developing rapidly.
To summarize: why do we need cloud robotics? To develop open-source technologies, to let robots learn from one another and from people, and to gain access to huge amounts of data and computing power.
Thank you very much for your attention.