
Photo: Tristan Schmurr, CC

Between the May holidays we published a post about the history of 1cloud, in which we tried to share our experience of choosing a direction for the development of an IT project. We also discussed the pros and cons of virtual IT infrastructure in general and how data centers are changing right now (today we continue that topic).
According to IDC forecasts, the amount of data humanity generates doubles every two years (we have touched on this in a different context) and will reach 44 zettabytes by 2020. All of it needs to be stored and processed somewhere.
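For a sense of scale, here is a back-of-the-envelope check of that estimate; the 2013 baseline of roughly 4.4 ZB is an assumption based on IDC's widely cited figure, not something stated in this post:

```python
# Back-of-the-envelope check of the "doubling every two years" estimate.
# The 2013 baseline of ~4.4 ZB is an assumption, not a figure from this post.
baseline_zb = 4.4            # zettabytes generated in 2013 (assumed)
years = 2020 - 2013          # horizon of the forecast
volume_zb = baseline_zb * 2 ** (years / 2)   # doubling every two years
print(f"Projected volume in 2020: ~{volume_zb:.0f} ZB")  # ~50 ZB, same ballpark as the quoted 44 ZB
```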
According to Emerson, as early as 2011 there were 509,147 data centers in the world, occupying a total area of about 27 square kilometers. Since then these numbers have only grown, as have the volumes of accumulated data.
To cope with this swelling mass of information, companies are offering new storage technologies: they are increasing the capacity of hard drives and developing solid-state drives based on flash memory and other non-volatile technologies.
The 2015 Flash Memory Summit gave the public a chance to get acquainted with new technological solutions. Samsung Vice President Jim Elliott announced the start of sales of a new 256-gigabit 48-layer chip whose cells store three bits of information each.
According to Elliott, these chips read twice as fast as the older 128-gigabit devices while consuming 40% less energy. The Toshiba corporation also presented its own 256-gigabit chip with a multi-tiered die-stacking structure, in which data from the memory arrays is transferred in parallel directly to the microcontroller through silicon vias.
Two main technological trends of the conference deserve separate mention. The first is the non-volatile 3D XPoint memory developed by Intel and Micron. According to analyst Dave Eggleston, the technology is based on phase-change elements and an Ovonic (Ovshinsky) switch, which makes it possible to build memory with a larger capacity than DRAM and performance exceeding that of flash.
The second trend is in-memory computing. Summit participants noted that when moving to software that keeps data in memory (Spark, for example), it makes sense to push computation toward the DIMMs rather than the CPU.
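As a rough illustration of the software side of this idea, here is a minimal PySpark sketch that pins a working dataset in RAM so that repeated queries avoid disk I/O; the dataset path and column names are hypothetical:

```python
# Minimal sketch of in-memory processing with Spark (illustrative only).
# The file path and column names below are hypothetical.
from pyspark import StorageLevel
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("in-memory-demo").getOrCreate()

df = spark.read.parquet("/data/events.parquet")   # hypothetical dataset
df.persist(StorageLevel.MEMORY_ONLY)              # keep the working set in RAM

# The first action materializes the cache; later actions reuse the in-memory
# copy instead of rereading from disk.
print(df.filter(df["status"] == "error").count())
print(df.groupBy("status").count().collect())

spark.stop()
```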
“If everything keeps moving in this direction, then in the future 90-95% of computations will be performed inside persistent memory,” said Tegile Systems CEO Rohit Kshetrapal. Micron agrees with this statement, which is not surprising: the company is working on Hybrid Memory Cube technology, in which a processor/interface die is built right into the stack of memory elements.
The chips are manufactured using TSV (through-silicon via) technology: copper channels are formed in the multilayer structure and act as conductors connecting the memory dies stacked one above another. According to the developers, compared with DDR3, HMC delivers a 15-fold increase in performance while consuming 70% less power.
In the near future these technologies will radically change our understanding of data center infrastructure.

Photo: Roy Niswanger, CC

Cooling data centers
Given that modern chips give off a great deal of heat and their packing density in servers keeps increasing, companies that operate data centers, along with other large vendors, are looking for ways to improve cooling efficiency.
There are already storage devices designed to reduce the amount of cold airflow required. One such technology is helium-filled hard drives: helium offers less resistance to the moving mechanical parts, so they heat up less and the HDD consumes 33% less electricity.
However, besides storage devices, a data center houses tens of thousands of servers that also need to be cooled. One way to cool them is to draw in outside air “from the street,” but, as practice shows, such an approach cannot be used everywhere.
The environmental situation in China leaves much to be desired, which not only harms residents' health but also complicates the operation of IT infrastructure. The Chinese search engine Baidu, in particular, has run into this problem: three of its data centers are located in Beijing, a city notorious for its smog. The smog contains many harmful substances (sulfur dioxide, nitrogen oxides) and particulate matter, so this air is unsuitable for cooling the machine room and can drive up hardware failure rates.
Baidu engineers are therefore working on cooling systems that do not require air from outside the building. One example is the Bottom Cooling Unit technology. Unlike traditional centralized fan systems, its heat-exchange coils sit directly under the equipment racks, and the racks themselves are placed in special cabinets. With this design the cold air rises straight to the hardware, and although liquid is used for cooling, placing the coils below the racks rules out water flooding the equipment in the event of a leak.
eBay is also looking toward water cooling, but unlike its Chinese colleagues its goal is more pragmatic: to fit more computing power into the same floor area.
The machine room of its Phoenix data center holds 16 rows of racks, each accommodating servers with a total power draw of 30-35 kW. Previously, cooling relied on six blower units installed in each row, which took up working space. Recently the company refitted the hall, abandoning forced-air cooling in favor of water-cooled rear doors with Motivair refrigeration units. This allowed it to win back the equivalent of six racks of space and fill it with new servers, increasing the computing power.
Motivair cooling doors are active modules with their own high-performance electronically commutated fans that boost air circulation in the rack through the cooling coils. Moreover, the system can run on relatively warm water, up to 15°C (other refrigeration systems typically require 7°C).
This raises a question: does data center cooling really need to be taken so seriously? It is still not entirely clear what temperature should be maintained inside. Most companies set the values recommended by their equipment suppliers, but it is unclear how raising the temperature actually affects system performance.
A group of researchers from the University of Toronto conducted a study to work out how temperature should be managed in data centers. It is commonly believed that server performance drops as the machine-room temperature rises. Indeed, once the temperature reaches a critical point the processor enters throttling mode and the fans spin up, which leads to additional leakage currents and higher power consumption.
On the other hand, raising the temperature by just one degree reduces energy consumption by 2-5% and saves a substantial amount of money on electricity.
The researchers collected data on three models of hard drives installed in Google data centers and found that the probability of latent sector errors (LSE) grows linearly with temperature, not exponentially as is commonly assumed (standard models, such as one based on the Arrhenius equation, predict that failures double with every additional 10-15°C). A similar conclusion holds for RAM.
It turns out that temperature affects equipment reliability far less than expected, so in light of these results it may make sense to raise the machine-room temperature slightly.
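To make these figures concrete, here is an illustrative sketch comparing an Arrhenius-style doubling model with a linear one, and estimating the cooling-energy effect of raising the setpoint; the baseline rate, linear slope, and doubling interval are assumed values, not the study's fitted parameters:

```python
# Illustrative comparison of the two failure models and the energy trade-off
# discussed above. All coefficients are assumptions for illustration only.
for t in range(25, 55, 5):                        # drive inlet temperature, degrees C
    arrhenius = 2.0 ** ((t - 25) / 12.5)          # doubles every ~10-15 degrees C
    linear = 1.0 + 0.04 * (t - 25)                # linear growth with an assumed slope
    print(f"{t} C: exponential ~{arrhenius:.2f}x, linear ~{linear:.2f}x (relative LSE rate)")

# Cooling-energy effect of raising the setpoint, using the 2-5% per degree figure.
for delta in (1, 3, 5):
    best, worst = (1 - 0.05) ** delta, (1 - 0.02) ** delta
    print(f"+{delta} C: cooling energy ~{best:.2f}-{worst:.2f} of the baseline")
```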
For this reason, some large companies use what is known as hot-water cooling. The same eBay has deployed Dell modular containers on a data center roof, cooled by a water circuit at 30°C, which still keeps the servers within a safe operating range.
IBM used a similar solution in a supercomputer for the Swiss Federal Institute of Technology in Zurich (ETH Zurich). It relies on the Aquasar cooling system, which consists of copper pipes carrying hot water attached to the heat sinks of the computing elements.
The water in the pipes is at 60°C, while the processor heat sinks reach 85°C. Passing through the circuit, the water warms to 65°C, is cooled back down to 60°C in a heat exchanger, and the process repeats. A full cycle takes about 20 seconds.
According to IBM, compared with forced-air systems that chill the air with 7-10°C water, this water-cooling system consumes 40% less energy. The water from the pipes can also be used to heat the floors and walls of adjacent rooms.
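As a rough illustration of the physics, the sketch below applies the basic heat balance Q = m·c·ΔT to the 60 to 65°C loop described above; the coolant flow rate is not given in the source, so the value used here is purely an assumption:

```python
# Rough heat balance for a hot-water cooling loop: Q = m * c * dT.
# The flow rate is NOT stated in the post; it is an illustrative assumption.
flow_lpm = 30.0              # assumed coolant flow, litres per minute (hypothetical)
c_water = 4186.0             # specific heat of water, J/(kg*K)
delta_t = 65.0 - 60.0        # temperature rise across the loop, K (from the post)

mass_flow = flow_lpm / 60.0                      # kg/s (1 litre of water ~ 1 kg)
heat_removed_w = mass_flow * c_water * delta_t
print(f"Heat carried away: ~{heat_removed_w / 1000:.1f} kW at {flow_lpm:.0f} l/min")
```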
P.S. In our blog on Habré we try to share not only our own experience with the 1cloud virtual infrastructure service, but also to cover related areas of knowledge. Don't forget to subscribe for updates, friends!