A survey conducted by Liebert, one of the leading manufacturers of cooling and air-conditioning systems for data centers, showed that 66% of respondents keep the temperature in their data centers no higher than 21C, and none above 24C.
(Chart of the survey results; horizontal axis: temperature, in "American" Fahrenheit.)
CRAH stands for Computer Room Air Handler, or what we would simply call an air conditioner. Meanwhile, the recommendations of ASHRAE (American Society of Heating, Refrigerating and Air-Conditioning Engineers), revised last year, set the upper limit of the working range at 27C at the server inlet.
This state of affairs matches my own observations in practice quite well.
Traditionally, data center owners and operators are guided by the rule "the colder, the better for the electronics"; of course, only until dew or snow starts falling in the hall ;)
It is believed that every extra degree above roughly 21C brings server components, by leaps and bounds, closer to death from overheating.
Often this is taken so much for granted that it is not even discussed.
However, the modern push toward "Green IT", that is, energy-saving technologies in data centers, could not pass over the question of what operating temperature server equipment actually needs.
The results of studying this question may seem rather unexpected. Everything indicates that the current practice of keeping data center temperatures stably below 22 degrees is suboptimal both for equipment operation and for component lifetime, and that the danger of high temperatures is sharply overestimated.
Thus, an article in The Register runs under a headline in the publication's traditional style: "Intel says data centers much too cold - Frozen assets a waste of cash".
Nevertheless, the topic it raises is not tabloid fare at all.
That the industry has traditionally overestimated the danger of high temperatures for data center equipment is confirmed, for example, by a recent study by Intel, in which 896 identical blade servers were divided equally into a "control" group and a "test" group (8 cabinets of 4 blade chassis each, with 14 blade servers per chassis, for a total of 448 per site). The control group was cooled in the traditional way, by a closed-circuit air conditioner, while the test group was cooled with ordinary outside air in an "open" circuit, with minimal dust filtering and no humidity control.
The aim of the study was to demonstrate the feasibility of building cost-effective data centers by cutting the cost of cooling (it is no secret that electricity is a substantial share of a modern data center's operating costs; powering the air conditioners alone can account for a quarter to half of total power consumption).
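To get a feel for the money at stake in that "quarter to half" share, here is a back-of-the-envelope sketch. The IT load and electricity tariff are illustrative assumptions, not figures from the Intel study:

```python
# Back-of-the-envelope estimate of annual cooling costs.
# All inputs are illustrative assumptions, not data from the Intel study.

IT_LOAD_KW = 500           # assumed IT equipment load
TARIFF_USD_PER_KWH = 0.10  # assumed electricity price
HOURS_PER_YEAR = 24 * 365

for cooling_share in (0.25, 0.50):  # "a quarter to half" of total consumption
    # If cooling is a fraction s of *total* power and the rest is IT load,
    # then total = IT / (1 - s) and cooling = total * s.
    total_kw = IT_LOAD_KW / (1.0 - cooling_share)
    cooling_kw = total_kw * cooling_share
    cost = cooling_kw * HOURS_PER_YEAR * TARIFF_USD_PER_KWH
    print(f"cooling share {cooling_share:.0%}: "
          f"{cooling_kw:.0f} kW of cooling, ~${cost:,.0f} per year")
```

For a modest 500 kW IT load this works out to roughly $150,000 to $440,000 a year spent just on moving heat, which is exactly why free cooling is tempting.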

Source: http://dashboard.imamuseum.org/node/830
(Vertical axis: kilowatts; horizontal axis: days of December. It is winter outside, so cooling costs are apparently at their minimum. This graph has nothing to do with the Intel test and is given only as an illustration of the typical structure of data center power consumption.)
Also:

Source: http://www1.eere.energy.gov/femp/program/dc_energy_consumption.html
In the Intel experiment, despite temperature fluctuations in the "unconditioned" room, at times reaching 32 degrees, the failure rate ended up only slightly different: 4.46% in the test group, versus 3.83% on average for Intel's "traditional" data centers and 2.45% in the control group, a spread that, by and large, falls within the statistical scatter.
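Whether that spread really sits "within the statistical scatter" is easy to sanity-check with a standard two-proportion z-test. A quick sketch, assuming each group contained the 448 servers mentioned above (the failure counts are inferred from the quoted percentages):

```python
from math import sqrt
from statistics import NormalDist

# Rough significance check for the quoted failure rates, assuming
# 448 servers per group (counts inferred from the percentages).
n = 448
p_test, p_ctrl = 0.0446, 0.0245  # fresh-air group vs control group

# Two-proportion z-test with a pooled estimate (groups are equal-sized).
p_pool = (p_test + p_ctrl) / 2
se = sqrt(p_pool * (1 - p_pool) * (2 / n))
z = (p_test - p_ctrl) / se
p_value = 2 * (1 - NormalDist().cdf(z))

print(f"z = {z:.2f}, two-sided p = {p_value:.2f}")
# z is about 1.65, p is about 0.10: not significant at the usual 5% level,
# consistent with the "statistical scatter" reading.
```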
An even more interesting picture emerges for the dependence of hard drive failures on temperature. In 2007, for example, Google engineers published a report on the frequency and causes of hard drive failures in the company's server centers (about 100 thousand drives were covered, over a study period of roughly nine months).
One of the study's interesting results indirectly confirms the ASHRAE recommendation on temperature conditions in data centers. According to the Google researchers' observations, the probability of hard disk failure rose sharply when drive temperature dropped below 30 degrees, and the lowest failure probability in the observed population corresponded to a temperature of fully 40C!
At an operating temperature of 40 degrees (all measurements were taken from the drives' SMART sensors), the failure probability did not exceed 1% AFR (Annual Failure Rate); raising the temperature to 50C doubled the AFR, to 2% (no higher temperatures were observed in the data center).
Yet lowering the temperature to 20C, paradoxically, increased the failure probability almost tenfold, to 10% AFR!
(In the graph, the histogram bars show the relative number of disks at each temperature; the points with T-shaped error bars show the AFR at that temperature and its statistical variation, which grows as the number of disks at a given temperature shrinks.) A noticeable increase in failures at significantly elevated temperatures was observed only for disks more than three years old.
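Since all the temperature readings in the Google study came from the drives' own SMART sensors, the same data point is easy to collect on any machine with smartmontools installed. A minimal sketch; the smartctl call and attribute table are real, but the 30-45C "comfort band" checked here is just an illustrative reading of the numbers quoted above, not an official recommendation:

```python
import subprocess

# Read a drive's temperature from its SMART attributes via smartmontools'
# smartctl (must be installed; usually requires root). Attribute naming
# varies by vendor; "Temperature_Celsius" (ID 194) is the most common form.
def drive_temperature(device="/dev/sda"):
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "Temperature_Celsius" in line:
            return int(line.split()[9])  # RAW_VALUE column of the table
    return None

temp = drive_temperature()
if temp is not None:
    # The 30-45C band is only an illustrative reading of the Google
    # figures quoted above, not an official threshold.
    verdict = "inside" if 30 <= temp <= 45 else "outside"
    print(f"Drive temperature: {temp}C ({verdict} the illustrative 30-45C band)")
```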
Findings: it is possible that the "colder is better" approach has become obsolete. This conclusion, paradoxical at first glance, is supported by statistical results suggesting that we underestimate the temperature "elasticity" of modern equipment and its ability to tolerate operating temperatures that we would consider "elevated".
Besides, every degree by which the data center temperature can be raised translates directly into savings on the electricity bill.
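To make that last point concrete, a toy estimate; the 4%-per-degree coefficient is purely an assumed, facility-dependent figure for illustration, not something taken from the studies above:

```python
# Toy estimate of per-degree savings on cooling. The 4% per degree figure
# is an assumption for illustration; the real coefficient depends on the
# facility, the cooling plant, and the climate.
ANNUAL_COOLING_COST_USD = 300_000  # assumed annual cooling electricity spend
SAVING_PER_DEGREE = 0.04           # assumed fraction saved per +1C of setpoint

for delta in range(1, 6):
    saved = ANNUAL_COOLING_COST_USD * SAVING_PER_DEGREE * delta
    print(f"raising the setpoint by {delta}C saves ~${saved:,.0f} per year")
```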
First published on the blog http://proitclub.ru/.