In June 2012, a group of researchers from Google launched a neural network on a cluster of 1,000 computers (16,000 processor cores; 1 billion connections between neurons). The experiment became one of the largest in the field of artificial intelligence, and the system was created from the outset to solve practical problems.
A self-learning neural network is a fairly universal tool that can be applied to different data sets. Google used it to improve speech recognition accuracy: "We achieved a 20-25% reduction in the number of recognition errors," said Vincent Vanhoucke, head of speech recognition at Google. "This means that many people will get an error-free result." The neural network optimized recognition for English, but Vanhoucke says similar improvements can be achieved for other languages and dialects.
The neural network is also used in the Google Street View project for processing small fragments of photos, where the task is to determine whether a number in the fragment is a house number or not. Surprisingly, on this task the neural network achieves better recognition accuracy than humans.
In the future, the neural network will be used in other Google products, such as image search, Google Glass and self-driving cars. Jeff Dean, an engineer on the neural network project, says that in a car the system can take contextual information into account, including data from laser rangefinders or, for example, the sound of the engine. According to Dean, a sufficiently powerful neural network can exploit a great deal of contextual information during training, which is why the team decided to build such a large cluster of 1,000 servers, while most researchers test neural networks on a single computer.
The first results of the experiment with Google's neural network were published in June 2012. Tests showed that the neural network is capable of successful self-learning. After viewing 10 million random frames from YouTube, neurons emerged in the network that respond selectively to the presence of faces in images. According to the scientists, Google's neural network behaved during self-learning roughly the way neurons in the mammalian visual cortex do, with the caveat that the Google network, despite its scale, is still far smaller in the number of nodes than the visual cortex.
The illustration below shows a composite image corresponding to the optimal stimulus for the neuron that acts as a cat-face classifier in the first experiment.

Composite image corresponding to the optimal stimulus for the neuron that classifies human faces.
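Such "optimal stimulus" images are typically obtained by numerically maximizing a neuron's activation over the input image under a norm constraint. The sketch below is a toy illustration of that idea, assuming a simple linear neuron with random weights; it is not Google's actual code, and all names in it are hypothetical:

```python
# Hypothetical sketch: finding a neuron's "optimal stimulus" by gradient
# ascent on the input image under a unit-norm constraint.
import numpy as np

rng = np.random.default_rng(0)

# Toy "neuron": activation = w . x for a flattened 8x8 grayscale image.
w = rng.normal(size=64)

def activation(x):
    return w @ x

# Gradient ascent on the input x, renormalizing to keep ||x|| = 1.
x = rng.normal(size=64)
x /= np.linalg.norm(x)
for _ in range(100):
    grad = w               # d(activation)/dx for a linear neuron
    x = x + 0.1 * grad
    x /= np.linalg.norm(x)

# For a linear neuron the optimal unit-norm stimulus is the
# normalized weight vector itself, so the ascent converges to it.
optimal = w / np.linalg.norm(w)
print(np.allclose(x, optimal, atol=1e-3))
```

For a deep network the gradient comes from backpropagation through the layers rather than being the weight vector directly, but the loop is the same: nudge the input toward higher activation, then project back onto the constraint set.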
