
New processor architecture tackles the combinatorial optimization problem

The speed of computers is constantly increasing, transforming the world around us, and information technology helps humanity cope with difficulties in many fields. Some real-world problems can be solved by computer systems that search through combinations of options. However, as the number of factors to analyze increases, the number of combinations grows explosively, and a “combinatorial optimization problem” arises. Solving it requires a dramatic increase in computing speed, which by now has become extremely difficult to achieve without fundamentally rethinking how the underlying hardware is built. This is exactly what Fujitsu has done.



Gordon Moore's law, which predicted a doubling of the number of transistors in semiconductor chips every two years, has been a basic rule in the design and manufacture of microelectronics for the past 50 years. The relentless growth in the performance of computing systems has brought humanity considerable benefits, but today experts no longer see the conditions for this law to continue.

If we want computers to significantly improve our quality of life in the future, the IT industry must take a number of measures, including the development of fundamentally new devices such as quantum computers. Systems based on a method called “quantum annealing” were developed specifically to solve combinatorial optimization problems.
Although the quantum computers available today are in principle capable of solving such problems, their capabilities are limited by the fact that only adjacent elements can be coupled to one another. Manufacturers of computing equipment therefore need to create a new computer architecture that can quickly and efficiently handle the large number of combinations arising from real-world factors.
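To make the target concrete, here is a minimal sketch (an illustration of mine, not taken from the article) of a combinatorial optimization problem written as energy minimization over bit strings, the general form that annealing-based systems work with. Even this toy instance hints at the explosion: each added variable doubles the number of combinations.

```python
from itertools import product

# Toy instance (illustrative only): max-cut on a triangle graph, phrased as
# minimizing an "energy" over all combinations of bits. Brute force is fine
# at 3 bits, but the search space doubles with every added variable.
edges = [(0, 1), (1, 2), (0, 2)]

def energy(bits):
    # Lower energy = more edges "cut" (endpoints assigned to different groups).
    return -sum(bits[i] != bits[j] for i, j in edges)

best = min(product([0, 1], repeat=3), key=energy)
print(best, -energy(best))   # one optimum, e.g. (0, 0, 1), cutting 2 edges
```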

New processor architecture


The University of Toronto and Fujitsu have jointly developed a new computer architecture for finding optimal solutions to practical problems by analyzing a huge number of combinations. The architecture is built on conventional CMOS technology, extending that technology's field of application.

The new development has two distinctive features.

1. First, it is a “non-von Neumann” architecture, and therefore minimizes the amount of data movement. Almost all modern computers use von Neumann processing: program data is stored in memory, and operations on it are then performed sequentially. In recent years computer performance has increased significantly, and the slow speed of reading instructions from memory has become a bottleneck. As a result, switching to non-von Neumann data processing looks increasingly attractive. Such architectures are used in neurocomputers (modeled on neural circuits), in quantum computers that exploit the quantum-mechanical behavior of elementary particles, and in DNA computers.


[Figure: Technology for minimizing data movement using non-von Neumann operations]

Thanks to non-von Neumann data processing (program execution) and in-place updating of the optimization variables (bits), the cost of solving a problem is reduced. The data is first loaded from memory, then optimized as necessary, and finally the finished result is written out. Since no data is read from or written to memory during the computation itself, both time and energy costs are reduced. In addition, because data movement between the basic circuits is minimized, only a minimal volume of data ever needs to travel to the upper levels of the hierarchy.
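This data-flow pattern can be sketched in software. The fragment below is purely illustrative and models the idea rather than Fujitsu's hardware: the problem is loaded from “memory” once, the working state is then updated entirely in local variables (standing in for on-chip registers), and only the finished result is written back.

```python
import random

def solve_in_place(couplings, n_steps=10_000):
    """Load once, iterate locally, write back once: no per-step memory traffic."""
    n = len(couplings)
    spins = [random.choice([-1, 1]) for _ in range(n)]   # local working state

    for _ in range(n_steps):
        i = random.randrange(n)                          # candidate bit to flip
        # Energy change of flipping spin i, computed from local data only.
        local_field = sum(couplings[i][j] * spins[j] for j in range(n) if j != i)
        if 2 * spins[i] * local_field < 0:               # keep flips that lower energy
            spins[i] = -spins[i]

    return spins                                         # single write-back of the result
```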

2. The second distinguishing feature is a high-speed technique in the basic optimization circuit. The circuit uses probability theory to search for the optimal state: the probability of finding it is increased by computing, in parallel, the energy change that each of several candidate moves would produce. If the search gets stuck partway (no candidate move is accepted), the method repeatedly adds a constant offset to the computed energy, increasing the likelihood that the current state is abandoned in favor of the next one. This approach allows the best solution to be found quickly; a sketch of the idea follows the figure below.


[Figure: Acceleration technique for the basic optimization circuit]
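A minimal sketch of this second idea, under assumed details that the article does not spell out (Metropolis-style acceptance and an offset that grows by a fixed constant whenever every candidate is rejected), might look like this:

```python
import math
import random

def annealing_step(spins, couplings, temperature, offset):
    """One step: evaluate every single-bit flip "in parallel" and accept one
    probabilistically; if nothing is accepted, grow the offset so that
    escaping the current state becomes more likely on the next step."""
    n = len(spins)
    # Energy change of each candidate flip (evaluated simultaneously in hardware).
    deltas = [2 * spins[i] * sum(couplings[i][j] * spins[j]
                                 for j in range(n) if j != i)
              for i in range(n)]
    # Metropolis-style acceptance test, softened by the escape offset.
    accepted = [i for i, d in enumerate(deltas)
                if random.random() < math.exp(-max(d - offset, 0.0) / temperature)]
    if accepted:
        flip = random.choice(accepted)   # apply one of the accepted flips
        spins[flip] = -spins[flip]
        return 0.0                       # progress made: reset the offset
    return offset + 1.0                  # stuck: raise the offset for next time
```

The list comprehension only mimics, sequentially, what the hardware evaluates for all bits at once; that parallel evaluation is where the circuit's speed advantage comes from.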

To test the new architecture, Fujitsu implemented the basic optimization circuit on an FPGA (a field-programmable gate array, i.e., a circuit whose configuration can be customized directly by the customer) capable of handling 1,024 bits. The new system is about 10,000 times faster than a similar program running on standard x86 processors.

Using several basic circuits that perform optimization in parallel makes it possible to solve a wider range of problems than the quantum computers currently available, while increasing both the scale of the problems solved and the speed at which they are processed. This allows, for example, the optimization of several thousand physically distributed databases. Fujitsu will continue to improve the new architecture and plans to build an experimental system scaling from 100 thousand to 1 million bits by 2018.

Source: https://habr.com/ru/post/327860/

