
IBM used in-memory computing for machine learning

Researchers from IBM Research have demonstrated a machine learning algorithm running directly on phase-change memory (PCM) devices. The method proved 200 times faster and more energy efficient than conventional computation based on the von Neumann architecture. According to IBM, the technology is suitable for building high-density, massively parallel, low-power systems for AI.



In in-memory computing, the same device, here phase-change memory (PCM), is used both to store and to process data. Abu Sebastian, a researcher at IBM Research, believes this approach resembles how the brain works.

In the traditional von Neumann architecture, data is processed and stored in separate devices. Constantly shuttling information between them hurts both the speed and the energy efficiency of computation.



Experiments


In their experiment, the IBM team used a million phase-change memory devices based on doped GeSbTe. Under an applied electric current, this alloy switches between a crystalline and an amorphous structure. The amorphous phase conducts poorly, so by varying its volume relative to the crystalline phase, a single cell can represent more states than just the two binary ones.
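To make the multi-level idea concrete, here is a minimal toy model of such a cell. This is not IBM's device physics: the endpoint conductances and the linear mixing rule are illustrative assumptions.

```python
import numpy as np

# Toy PCM cell: conductance drops as the amorphous (poorly conducting)
# fraction of the GeSbTe alloy grows. The endpoint values and the linear
# mixing rule below are illustrative assumptions, not measured device data.
G_CRYSTALLINE = 1e-4  # fully crystalline cell, siemens (assumed)
G_AMORPHOUS = 1e-7    # fully amorphous cell, siemens (assumed)

def cell_conductance(amorphous_fraction: float) -> float:
    """Conductance of a cell with the given amorphous volume fraction."""
    return (1 - amorphous_fraction) * G_CRYSTALLINE + amorphous_fraction * G_AMORPHOUS

# Programming partial amorphization yields many distinguishable levels
# per cell instead of just 0 and 1, e.g. eight evenly spaced states:
for fraction in np.linspace(0.0, 1.0, 8):
    print(f"amorphous fraction {fraction:.2f} -> {cell_conductance(fraction):.2e} S")
```

In real devices the relationship is nonlinear and noisy, which is exactly the accuracy trade-off the researchers mention at the end of the article.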

This property made it possible to implement a machine learning algorithm that finds correlations between unknown data streams. The researchers conducted two experiments (a simplified software sketch follows the list):

On simulated data. A million random binary processes were arranged on a 2D grid superimposed on a 1000x1000-pixel black-and-white image of Alan Turing. All pixels blink at the same rate, but the black pixels light up and fade in a weakly correlated manner.

So whenever one black pixel blinks, the probability that other black pixels also flash at the same moment is elevated. These processes were wired to the million PCM devices, and the learning algorithm was launched. With each blink, the algorithm's confidence about which processes are correlated grew, and in the end the system reproduced the image.

On real data. The scientists took six months of US rainfall data from 270 weather stations, sampled at one-hour intervals: an hour with rain was marked as 1, otherwise 0. The algorithm had to establish correlations between the data from different stations. Its results were compared against the k-means method; the two methods agreed on 245 of the 270 stations.
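The paper's exact update rule runs in the PCM hardware itself; the following is only a rough software sketch of the idea under stated assumptions. Each accumulator stands in for one PCM cell and is nudged upward, whenever its process fires, by the fraction of the whole population firing at the same instant, so cells attached to correlated processes charge up faster. All sizes and rates are scaled-down illustrative values, and k-means here is used merely to split the accumulated values into two groups (the article's comparison ran k-means on the raw station data instead).

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
T, N, N_CORR = 2000, 1000, 300  # time steps, processes, correlated subset (scaled down)
P, C = 0.05, 0.3                # firing rate and correlation strength (illustrative)

# Correlated group: each process copies a hidden common driver with
# probability C, otherwise fires independently at rate P, so every
# process has the same marginal rate P ("pixels blink at the same frequency").
driver = rng.random(T) < P
copies = rng.random((T, N_CORR)) < C
streams = np.empty((T, N), dtype=bool)
streams[:, :N_CORR] = np.where(copies, driver[:, None], rng.random((T, N_CORR)) < P)
streams[:, N_CORR:] = rng.random((T, N - N_CORR)) < P  # uncorrelated group

# One accumulator per process stands in for one PCM cell's conductance:
# each time a process fires, its cell is nudged up by the fraction of the
# population firing at that instant -- a crude proxy for correlation.
activity = streams.mean(axis=1)
conductance = streams.T.astype(float) @ activity

# Cells of correlated processes accumulate higher values; split the cells
# into two groups and check agreement with the ground truth.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    conductance.reshape(-1, 1))
truth = np.arange(N) < N_CORR
accuracy = max((labels == truth).mean(), (labels != truth).mean())
print(f"processes grouped correctly: {accuracy:.1%}")
```

In the actual experiment the equivalent of the accumulator update is a crystallization pulse applied to the PCM cell, so the "learning" happens inside the memory itself rather than in a separate processor.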



Although this is still only a laboratory experiment, the scientists consider it very important. According to Evangelos Eleftheriou, it is a big step in research into the physics of AI, and the in-memory computing architecture will help overcome the limits of computer performance.

Among the solution's shortcomings, the researchers cite a possible loss of computational accuracy, which is why the system is planned for areas where 100% precision is not required. At the IEDM conference in December 2017, IBM Research will present one of the applications of in-memory computing.




Source: https://habr.com/ru/post/341568/

