
Fujitsu will build a supercomputer for artificial intelligence research

Last week, Japan's National Institute of Advanced Industrial Science and Technology (AIST) selected Fujitsu to build the ABCI (AI Bridging Cloud Infrastructure) supercomputer. It will serve as a platform for research in artificial intelligence, robotics, self-driving vehicles and medicine.

ABCI is expected to deliver 37 petaflops of double-precision performance. The machine will come online in 2018 and will be the fastest in Japan.


Image: Flickr / NASA Goddard / CC

What's inside


The system will use 1,088 Fujitsu Primergy CX2570 servers, each with two Intel Xeon Gold processors and four NVIDIA Tesla V100 GPUs. To speed up local I/O, the supercomputer will be equipped with Intel SSD DC P4600 NVMe drives built on 3D NAND technology.
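The 37-petaflops figure can be sanity-checked with back-of-the-envelope arithmetic. A minimal sketch, assuming the V100's published ~7.8 TFLOPS FP64 peak and a Xeon Gold 6148-class CPU (the article does not name the exact CPU model, so that part is an assumption):

```python
# Back-of-the-envelope peak FP64 estimate for ABCI.
# Assumptions (not in the article): Tesla V100 SXM2 at ~7.8 TFLOPS FP64;
# Xeon Gold 6148 (20 cores x 2.4 GHz x 32 FP64 FLOP/cycle with AVX-512).

nodes = 1088
gpus_per_node = 4
cpus_per_node = 2

gpu_tflops = 7.8                   # per V100, FP64 peak
cpu_tflops = 20 * 2.4 * 32 / 1000  # ~1.54 TFLOPS per CPU

gpu_total_pflops = nodes * gpus_per_node * gpu_tflops / 1000  # ~33.9
cpu_total_pflops = nodes * cpus_per_node * cpu_tflops / 1000  # ~3.3

print(f"GPU peak:   {gpu_total_pflops:.1f} PFLOPS")
print(f"CPU peak:   {cpu_total_pflops:.1f} PFLOPS")
print(f"Total peak: {gpu_total_pflops + cpu_total_pflops:.1f} PFLOPS")  # ~37.3
```

Under those assumptions the GPUs alone contribute about 34 petaflops, with the CPUs topping the total up to roughly the announced 37.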

The NVIDIA Volta architecture and the Tesla V100 accelerators run hotter than the other components and therefore need liquid cooling. Fujitsu addresses this with an unconventional approach: hot-water cooling.

This method lets operators use fewer chillers, or none at all. In 2015, Fujitsu announced that direct-to-chip liquid cooling had halved the cooling costs of its Primergy servers, bringing their PUE down to 1.06.
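For context, PUE (Power Usage Effectiveness) is simply total facility power divided by the power drawn by the IT equipment itself. A short illustration with made-up numbers:

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_load_kw

# Illustrative figures only: a PUE of 1.06 means just 6% overhead
# (cooling, power distribution, lighting) on top of the IT load itself.
print(pue(1060.0, 1000.0))  # -> 1.06
```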

Another ABCI design decision is cooling at the rack level: 50 kW of liquid cooling plus 10 kW of air cooling per rack. The processors are fitted with cold plates that hold their temperature and carry away excess heat.

Where to put it


The ABCI project budget is $172 million, of which ten million will go to building a new data center for the system on the University of Tokyo campus. The data center's maximum power draw will be 3.25 MW, with 3.2 MW of cooling capacity, and it will have a concrete floor. It will start with 90 racks: 18 for storage and 72 for compute.
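These figures can be cross-checked against the rack-level cooling described above. A quick sketch (my own arithmetic, not from the article; it assumes the bulk of the 3.25 MW feeds the compute racks, which the article does not state):

```python
# Cross-check: average power available per compute rack vs. the
# 50 kW liquid + 10 kW air cooling provisioned at the rack level.

facility_kw = 3250         # 3.25 MW maximum facility power
compute_racks = 72         # 18 of the 90 racks hold storage

# Assumption: most of the power budget feeds the compute racks.
kw_per_rack = facility_kw / compute_racks
print(f"~{kw_per_rack:.0f} kW per compute rack")  # ~45 kW

rack_cooling_kw = 50 + 10  # liquid + air, per rack
print(kw_per_rack <= rack_cooling_kw)  # True: the cooling has headroom
```

On those assumptions each compute rack averages about 45 kW, comfortably within the 60 kW of cooling provisioned per rack.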

The construction of the data center began this summer, and the ABCI system itself will be launched in 2018.

Who has more petaflops


Building supercomputers is something of an arms race. China's Sunway TaihuLight is currently considered the fastest, at 93 petaflops.

It is followed by Tianhe-2, another Chinese machine, at 34 petaflops. Third is Switzerland's Piz Daint with its 19.6 petaflops. Then come the American systems Titan (17.6 petaflops), Sequoia (17.1) and Cori (14). Japan's Oakforest-PACS, at 13.5 petaflops, rounds out the top seven.

Russia sits 59th in the ranking with Lomonosov-2, a 2.1-petaflops machine.

Supercomputers are put to different uses. The most powerful of them have helped scientists build a virtual model of the universe. Tianhe-2 guards China's top-secret data and the security of the state itself. One application of Piz Daint is modeling in high-energy physics.

The US National Nuclear Security Administration uses Sequoia to simulate nuclear explosions, while other scientists run cosmological simulations and models of the human heart on it.

Titan is used for research such as modeling the behavior of neutrons in a nuclear reactor and predicting climate change. In Japan, Oakforest-PACS supports research and the training of students interested in supercomputing.

The era of exaflops


In 2018, the United States will launch Summit, a supercomputer with 200 petaflops of performance. Chinese scientists will answer with Tianhe-3, whose performance should reach one exaflops; a prototype is expected in 2018. In 2020, France will join the race: Atos plans to launch an exaflop-class Bull Sequana.

However, experts note that a wholesale move to exascale would bring excessive power consumption and waste heat. To operate at the exaflops level, the computing community will have to coordinate changes across the entire ecosystem: hardware, software, algorithms and applications.
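To put the power problem in numbers, here is a naive extrapolation from Sunway TaihuLight's published Top500 figures (~93 petaflops at ~15.4 MW); the linear scaling is an assumption for illustration, not a prediction:

```python
# Naive power extrapolation to exascale from today's leader.
# TaihuLight's figures are public Top500 data; linear scaling is
# assumed purely to show the size of the gap.

taihulight_pflops = 93.0
taihulight_mw = 15.4

gflops_per_watt = taihulight_pflops * 1e6 / (taihulight_mw * 1e6)
exascale_mw = 1000.0 / taihulight_pflops * taihulight_mw

print(f"Efficiency today: ~{gflops_per_watt:.1f} GFLOPS/W")   # ~6.0
print(f"1 EFLOPS at that efficiency: ~{exascale_mw:.0f} MW")  # ~166 MW
```

At today's roughly 6 GFLOPS per watt, a one-exaflops machine would draw on the order of 166 MW, comparable to a small power plant, which is exactly why efficiency gains across the whole stack come first.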

Many operators are already turning to solar power and advanced cooling systems, but that alone will not be enough for powerful supercomputers to become widespread.

According to Horst Simon of Lawrence Berkeley National Laboratory, the difficulty is that several scientific breakthroughs have to happen at once: first we must figure out how to cut energy consumption and stop wasting excess heat, and only then can the race continue.


Source: https://habr.com/ru/post/338484/

