
Supercomputers: The Third World Race

I have just returned from the 2011 PAVT conference and would like to bring the esteemed Habr community up to date on the current state of affairs in high-performance computing.
Where possible, I will try to refer to primary sources, namely articles from the Supercomputers magazine and the conference proceedings.

Why is all this necessary


Supercomputers have traditionally been used for military and scientific purposes, but in recent years their use has changed dramatically: their power has grown to the point where real processes and objects can be simulated at prices affordable to business.
Everyone probably knows that the automotive industry uses supercomputer calculations to improve safety - that is how the Ford Focus earned its 5 stars. In the aircraft industry, developing a new jet engine by traditional methods is an expensive affair: creating the AL-31 for the Su-27 took 15 years, required building and destroying 50 prototypes, and cost 3.5 billion dollars. The engine for the Sukhoi Superjet, designed with the help of supercomputers, took 6 years, cost 600 million euros, and needed only 8 prototypes.
Pharmaceuticals deserve a mention as well - most modern drugs are designed using virtual screening, which drastically reduces costs and improves drug safety.
And there is more.
Today in developed European countries:
47.3% of high-tech products are created using simulation of fragments of the complex systems or products being designed;
32.3% are created using simulation of small-scale analogues of the systems and products being designed;
15% are created using full-scale simulation of the systems and products being designed;
and only 5.4% of complex systems and products are designed without simulation at all.

Supercomputer technologies have become a strategic area of the modern world, without which further development is impossible. The power of a nation's supercomputers matters as much as the power of its power plants or the number of its warheads.
And so the world has entered a new race.


The Race for the Exaflop


Why the exaflop? There has been an 11-year cycle of thousand-fold performance growth: gigaflops, teraflops, petaflops... The petaflops barrier was broken in 2008, and at the same time the US defense establishment set itself the goal of reaching the exaflops level by 2018. Because of the importance of the task, all the leading countries have joined the race, which is why first place is currently held by a Chinese supercomputer.
Supercomputing has moved to the forefront of the economic race for two reasons: first, it is in demand like never before; second, the limit of extensive development - raising processor frequencies while lowering their power consumption - has been exhausted, and nobody yet knows how to build that exaflops machine.

Problems that remain unsolved:
1) Power consumption. Existing supercomputers consume megawatts of power. An exaflops machine built with the same technology would consume gigawatts, which is already comparable to the consumption of an entire city. At the conference it was half-jokingly suggested that supercomputers be connected to district heating systems so that the power would not go to waste.
2) Reliability. The more nodes, the lower the reliability; exaflops computers will be breaking down continuously, and their operation will have to account for this at all times. Programming technologies for fundamentally unreliable systems are still in their infancy.
3) Efficiency. As the number of cores grows, the efficiency of their joint work steadily falls (see the sketch below). Nobody really knows how to program exaflops systems with millions of parallel cores; new languages and a change of programming paradigms are needed.
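To see why efficiency drops, here is a tiny illustration (my own sketch, not from the conference materials) based on Amdahl's law: even with an optimistically small serial fraction of 1%, per-core efficiency collapses long before you reach a million cores.

```c
/* Illustrative sketch (not from the original article): Amdahl's law shows why
 * efficiency falls as the core count grows. The 1% serial fraction below is
 * an assumed, optimistic figure chosen purely for illustration. */
#include <stdio.h>

int main(void) {
    double serial = 0.01;                   /* assumed serial fraction: 1% */
    long cores[] = {1, 8, 1024, 1000000};

    for (int i = 0; i < 4; ++i) {
        long p = cores[i];
        double speedup    = 1.0 / (serial + (1.0 - serial) / p); /* Amdahl's law */
        double efficiency = speedup / p;                         /* per-core utilization */
        printf("%8ld cores: speedup %10.1f, efficiency %7.3f%%\n",
               p, speedup, 100.0 * efficiency);
    }
    return 0;
}
```

With these numbers, a thousand cores already work at under 10% efficiency, and a million cores at a fraction of a percent.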

In the 20th century there were two great races that largely determined the further development of civilization: the nuclear race and the space race.
Our generation gets the race of computation.

Russia and supercomputers


Until recently the situation with supercomputers in Russia was dismal: only a single Russian machine appeared in the Top500 list. Now the situation is changing - there are already 11 of our supercomputers in the rating, and the best of them, Lomonosov from T-Platforms, sits in 17th place in the current list with a performance of 500 TFlops (at the time of launch it was 11th). A large-scale upgrade of Lomonosov with graphics accelerators is planned this year, after which it should enter the top ten. Interestingly, we held first place only once, back in 1984, with the M-13 computer built for the missile defense system.
T-Platforms, the leader of the Russian supercomputer market, currently holds first place in packaging density, which gives hope for good results in the current supercomputer race. The director of T-Platforms spoke at the conference, and it was a brilliant presentation: young, competent, enthusiastic! I cannot say the same about our other player, RSC SKIF - the guys frankly botched the plenary report and the following talk as well.
According to unconfirmed reports, the first Russian petaflops supercomputer was launched in March in Sarov, the city of our nuclear scientists, but it has not yet made it into the Top500.
Considerable state funds are now flowing into supercomputing, but money alone is not enough - new ideas and the work of thousands of people are needed. There are two pieces of news, one good and one bad. The bad news is that almost all serious supercomputer modeling software was developed in the USA, and the Americans actively exploit their monopoly by controlling its distribution; as a result, our aircraft engine designers, for example, use not specialized aviation software but more affordable automotive packages, which naturally offer far fewer capabilities. The good news is that with mass parallelism coming, all of this software will have to be rewritten anyway, which means we have a very good chance of becoming a leader.

Processors


Since the growth of clock frequencies in silicon processors has stopped, supercomputer development is moving along the path of ever greater parallelism - increasing the number of processing cores. This is not the only path, however: the search for a replacement for silicon continues. For a long time the role of "savior" was assigned to graphene, famously obtained with scotch tape by "our former" scientists; now there is a new candidate, molybdenite. But all of that is the future, if not a very distant one; for now what lies ahead is ever greater parallelism of processor architectures, with 100-core chips already being planned. It is not raw performance but performance per watt of consumed energy that is becoming critical. In this regard the AMD Bulldozer project is interesting: its cores share common blocks to reduce power consumption.
Intel is not lagging behind either, developing the Intel MIC architecture - a many-core accelerator based on lightweight Pentium cores.
Even the Atom processor, designed for mobile devices, is now being used in supercomputers.

Graphics Accelerators


NVIDIA was the first to successfully apply the technologies developed for powerful gaming graphics cards to parallel computing, and AMD did not stand aside either with its FireStream accelerators. Using graphics accelerators (GPGPU) yields substantial computing power at roughly a tenth of the cost, in both money and power consumption. In the Top500, three of the first five supercomputers use NVIDIA Tesla accelerators. GPGPU is the only affordable way to get a "personal supercomputer" of teraflops performance in an ordinary desktop case. However, not everything is cloudless in the Danish kingdom: programming for graphics accelerators, as I have already written, is no easy task. There is also the question of what to choose - an expensive specialized Tesla or a top-end gaming card, which is both faster and cheaper. In any case, no alternative to arrays of "lightweight" cores is in sight, which means we will have to write more and more parallel code. And right now we have a very interesting opportunity: supercomputer programming can be learned even on a netbook with NVIDIA ION - that has never happened before.
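For readers who have never seen GPGPU code, here is a minimal CUDA sketch (illustrative, not from the article) of the style in question: the classic SAXPY operation, where each of thousands of lightweight GPU threads handles a single array element instead of one CPU core looping over all of them.

```cuda
// Minimal GPGPU sketch: SAXPY (y = alpha*x + y) on an NVIDIA GPU via CUDA.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float alpha, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)
        y[i] = alpha * x[i] + y[i];                 // one element per thread
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // Host data
    float *hx = (float *)malloc(bytes), *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Device copies
    float *dx, *dy;
    cudaMalloc((void **)&dx, bytes);
    cudaMalloc((void **)&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements
    int threads = 256;
    int blocks  = (n + threads - 1) / threads;
    saxpy<<<blocks, threads>>>(n, 3.0f, dx, dy);

    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %.1f (expected 5.0)\n", hy[0]);  // 3*1 + 2

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}
```

This compiles with nvcc; the same idea is expressed through OpenCL kernels on AMD cards.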

Programming


A modern supercomputer is a heterogeneous system: a cluster of nodes, each with several shared-memory processors plus several graphics accelerators, with the nodes connected by a high-speed interconnect (most often InfiniBand nowadays; there is even a tongue-in-cheek "domestic" definition of a supercomputer as "anything with InfiniBand").
This marvel of technology is programmed with a combination of MPI (distributing the task across nodes), OpenMP (across the processors of a node), and CUDA/OpenCL (on the graphics accelerators).
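To make this concrete, here is a minimal sketch (mine, not from the article) of the MPI + OpenMP part of that combination: MPI splits an integration task across nodes, OpenMP spreads each node's chunk over its CPU cores; a real code would additionally offload the inner loop to the GPUs via CUDA or OpenCL.

```c
/* Hybrid MPI + OpenMP sketch: approximate pi by integrating 4/(1+x^2) on [0,1]. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

#define N 1000000

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each node (MPI rank) handles its own chunk of indices. */
    long chunk = N / size;
    long begin = rank * chunk;
    long end   = (rank == size - 1) ? N : begin + chunk;

    double local_sum = 0.0;

    /* Within a node, OpenMP spreads the chunk over the CPU cores. */
    #pragma omp parallel for reduction(+ : local_sum)
    for (long i = begin; i < end; i++) {
        double x = (i + 0.5) / N;
        local_sum += 4.0 / (1.0 + x * x);
    }

    /* Combine the per-node results on rank 0. */
    double total = 0.0;
    MPI_Reduce(&local_sum, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("pi ~= %.10f\n", total / N);

    MPI_Finalize();
    return 0;
}
```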
Optimizing code in such a system (loop unrolling, for example) quickly makes it completely unreadable, yet without optimization the efficiency is hopelessly low.
The only way to fight this is to create new environments and programming languages. For example, Mosix OpenCL is being developed, which lets you work with a whole cluster as one big OpenCL-compatible device. I also found the NUDA project presented at the conference interesting: it is a high-level wrapper over OpenCL written in the Nemerle language that can automatically generate optimized code for graphics accelerators.

Participation


So, the race has begun and we have a chance to make history here.
What can be done now?
Study. The "Internet University of Supercomputer Technologies" is now available.
Take part in the competition for the effective use of GPU accelerators in solving large problems.
What else?
I don't know yet. Write in - let's figure it out together.

Source: https://habr.com/ru/post/116733/

