
"Neuromorphic chips": a different look at machine learning

Most of the computers we use today are built on the so-called von Neumann architecture. This approach is well suited to solving equations and running conventional algorithms, but not to processing images or sound. Google did teach an artificial intelligence to recognize cats in videos back in 2012, but doing so took the company 16,000 processors.

That is why researchers are working on new architectures that would let machines interact with their environment more effectively. One such solution is the neuromorphic chip, the subject of today's article.

/ Flickr / The Preiser Project / CC
Neuromorphic chips model the way our brain processes information: billions of neurons and trillions of synapses react to signals arriving from the senses, and the connections between neurons constantly change in response to images, sounds and other stimuli. We call this process learning. The idea is to get chips to do the same.
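As a rough illustration of the kind of computation neuromorphic hardware is built around, here is a minimal sketch of a leaky integrate-and-fire neuron, a common simplified neuron model. The parameters and names are purely illustrative and not tied to any particular chip.

```python
# A minimal leaky integrate-and-fire neuron: the membrane potential
# integrates incoming spikes, leaks over time, and emits a spike of
# its own when it crosses a threshold. Parameters are illustrative;
# in a learning system the synaptic weight would itself change over time.

def simulate_lif(input_spikes, leak=0.9, weight=0.4, threshold=1.0):
    potential = 0.0
    output_spikes = []
    for spike in input_spikes:
        potential = potential * leak + weight * spike  # integrate and leak
        if potential >= threshold:                     # fire...
            output_spikes.append(1)
            potential = 0.0                            # ...and reset
        else:
            output_spikes.append(0)
    return output_spikes

# A burst of input spikes drives the neuron past threshold.
print(simulate_lif([1, 1, 1, 0, 0, 1, 1, 1]))  # -> [0, 0, 1, 0, 0, 0, 0, 1]
```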

Even if neuromorphic chips fall short of the real brain in raw "performance", they should still outpace conventional computers at learning and at processing sensory information.

Cognitive development


The idea of neuromorphic chips is not new. Carver Mead, a professor at the California Institute of Technology, coined the term around 1990, noting that analog chips, unlike binary ones, could imitate the activity of the brain, although he never managed to bring the idea to life in an actual chip. Today, however, several companies are actively working on implementing this architecture in silicon.

Comparison of a conventional architecture with a neuromorphic one (Source)

In 2008, commissioned by DARPA, IBM Research began work on a neuromorphic chip. Six years later, in 2014, the researchers presented the TrueNorth system to the public: 1 million digital neurons and 256 million synapses packed into 4,096 neurosynaptic cores.
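Those headline figures fit together neatly, assuming (as the corelet description below suggests) 256 neurons and a 256 × 256 grid of synapses per core. A quick sanity check:

```python
# Sanity check of the TrueNorth figures quoted above.
cores = 4096
neurons_per_core = 256            # matches the 256 outputs of a corelet, described below
synapses_per_core = 256 * 256     # every one of 256 axons can connect to every neuron

print(cores * neurons_per_core)   # 1,048,576   -> "1 million neurons"
print(cores * synapses_per_core)  # 268,435,456 -> "256 million synapses"
```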

TrueNorth is a modular system: individual chips play the role of groups of brain neurons, and by wiring such chips together researchers form an artificial neural network. According to the company, TrueNorth consumes far less power than its "classical" counterparts.

The neurochip, with its 5.4 billion transistors, consumes 70 mW, while an Intel processor with almost four times fewer transistors requires about 140 W. The researchers plan to reduce the power consumption and size of subsequent TrueNorth versions even further, so that they can find their way into mobile devices and watches.
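To put those numbers in perspective, here is a quick back-of-the-envelope comparison of power per transistor. The CPU transistor count is an assumption derived from the "almost four times fewer" figure above, not a number from the article.

```python
# Rough per-transistor power comparison (power figures from the article;
# the CPU transistor count is an assumption based on "almost 4x fewer").
truenorth_power_w = 0.070          # 70 mW
truenorth_transistors = 5.4e9

cpu_power_w = 140.0                # ~140 W
cpu_transistors = 5.4e9 / 4        # assumed ~1.35 billion

per_transistor_truenorth = truenorth_power_w / truenorth_transistors
per_transistor_cpu = cpu_power_w / cpu_transistors

print(f"TrueNorth: {per_transistor_truenorth:.2e} W per transistor")
print(f"CPU:       {per_transistor_cpu:.2e} W per transistor")
print(f"Ratio:     ~{per_transistor_cpu / per_transistor_truenorth:,.0f}x")
# Under these assumptions, the ratio comes out to roughly 8,000x in TrueNorth's favour.
```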

IBM expects TrueNorth to become a new milestone in computing and to be used in high-performance systems, for example in data centers.

Interestingly, the company created a new programming language for working with neuromorphic chips. At its core is the concept of corelets: object-oriented abstractions of the neurosynaptic cores. In this software architecture, each corelet has 256 inputs (axons) and 256 outputs (neurons), through which the cores connect to one another.

"In the traditional architecture, the processor and memory interact sequentially," said Dharmendra S. Modha, lead researcher of the SyNAPSE project. "Our architecture is a set of LEGO bricks. Each corelet has its own function, and you simply combine them." For example, such a system could be used to find a person in a crowd: one corelet might look for a certain nose shape, another for the color of clothing, and so on.
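The real Corelet language is IBM's own toolchain, but the compositional idea can be illustrated with a small, purely hypothetical sketch: independent "corelet-like" detectors, each exposing a fixed 256-wide spike output, combined into one decision. None of the class or function names below come from IBM's API.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical illustration of the "LEGO brick" composition idea.
# A Corelet here is just a named feature detector with a fixed-width
# binary (spike-like) output; IBM's actual Corelet language differs.

WIDTH = 256  # each corelet exposes 256 outputs, mirroring the article

@dataclass
class Corelet:
    name: str
    detect: Callable[[dict], float]  # returns a confidence in [0, 1]

    def outputs(self, observation: dict) -> List[int]:
        """Encode the confidence as a crude 256-wide spike pattern."""
        active = int(self.detect(observation) * WIDTH)
        return [1] * active + [0] * (WIDTH - active)

def combine(corelets: List[Corelet], observation: dict) -> float:
    """Combine several detectors by averaging their firing rates."""
    rates = [sum(c.outputs(observation)) / WIDTH for c in corelets]
    return sum(rates) / len(rates)

# Two toy detectors for the "person in a crowd" example from the text.
nose_shape = Corelet("nose_shape", lambda o: o.get("nose_match", 0.0))
clothes_color = Corelet("clothes_color", lambda o: o.get("color_match", 0.0))

score = combine([nose_shape, clothes_color],
                {"nose_match": 0.8, "color_match": 0.6})
print(f"match confidence: {score:.2f}")  # ~0.70
```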

/ Flickr / IBM Research / CC

But IBM is not the only company engaged in such developments. Potential market players include giants such as Google and, notably, Qualcomm.

Not long ago, Qualcomm demonstrated the capabilities of its new neuromorphic chip at its San Diego headquarters. A small pug-sized robot called Pioneer drove up to a children's toy and began pushing it toward three short columns.

Qualcomm lead engineer Ilwoo Chang showed with both hands where the toy should go, and Pioneer, recognizing the gesture with its built-in camera, completed the task. It then went to fetch another toy and brought it to the same columns without any prompting.

The robot performed tasks that usually require powerful specialized computers: Pioneer can already recognize new objects, sort them by their similarity to objects it has seen before, and respond to gestures.

Qualcomm notes that the neuromorphic chip controlling the robot is digital rather than analog, yet it still emulates various aspects of how the human brain behaves. According to its creators, such a processor, placed in mobile devices, computers and robots, will allow machines to learn on their own.

The project was named Zeroth, and according to company representatives the first such chips were supposed to go on sale in 2014, but that did not happen. In 2015, however, the company did present a computing platform of the same name.

A new kind of machine learning, but not today


As noted above, these chips should let the devices we already use learn on their own. Medical gadgets, for example, will learn to recognize vital signs in order to act on a patient's condition preventively, and smartphones will learn to anticipate their owners' wishes.

However, some obstacles still have to be overcome. The problem of neuron layout has not been solved: it is hard to match the size of the "brain" to its capabilities. Difficulties arise at every stage, from assembly and power delivery to heat dissipation and managing the topology.

Another set of difficulties is associated with the abstract nature of neurocomputation itself. How close a copy of our brain needs to be built to solve the desired tasks? And how will such chips interact with classical computing?

Almost all of these projects are still in testing, and their use in smartphones and watches remains a long way off. Whether scientists can overcome these difficulties, only time will tell.


Source: https://habr.com/ru/post/321870/

