Four things prompted me to write this publication:
1. The publication "Why it is important to maintain the temperature in the server room, and how server cooling is usually arranged", in which the author took on the honorable and very difficult mission of explaining the need to cool servers.
2. The errors I read in that post.
3. My own experience.
4. The number of informative articles on Habr about data center infrastructure (and a server room is, in essence, a small data center) tends to zero. Although "Beeline" and the guys from "TSODy.rf" are doing great work in this regard.
So, what will this publication be about?
First, a small excursion into the theory of server cooling. Second, I will try to sort out the main misconceptions in planning the cooling. And third, an analysis of where it is actually worth investing money and what can be skipped.
Today there are two global strategies for cooling a data center:
1. Free cooling. This is when the servers are cooled directly by outside air with minimal preparation (usually basic filtration and heating in winter).
2. Controlled cooling, let's call it that. This is when you condition the air for contamination, humidity and temperature and feed it into the server room. This also includes various methods of indirect free cooling (outside air is used to cool a heat exchanger containing air from the data center).
The advantages of the first strategy are obvious: low cost of implementation, low cost of maintenance, ridiculously low electricity bills. The disadvantages are also clear: uncontrolled humidity and dustiness of the air, which inevitably leads to the failure of server components. This approach has its followers, usually very large technology companies. Why is it good for them and bad for others? There are three reasons:
1. A network of fully redundant sites. If one fails, another picks up the load.
2. The need to be at the cutting edge of technology. A server running on bad air will fail in about a year, and over that year these companies will replace a third of their server fleet anyway. They do not need to baby hardware that will be scrapped in a year.
3. Volumes and electricity bills. Cooling is the most expensive item in the electricity bill. Reducing cooling costs by even 1% saves them several million dollars, to say nothing of a 30-50% reduction. So they are willing to put up with some inconvenience.
The second strategy implies greater reliability and a long service life for the cooled equipment. The most traditional example is the banking industry, plus all the other companies that do not change servers like gloves. The disadvantages of this strategy are price, price and price: construction, maintenance, electricity.
It is clear that most companies are looking for the option that is "as functional as possible and without frills." However, simple is not always correct. Sometimes it turns out simple and correct, and sometimes quite the opposite (I have taken my share of hits here myself).
Let's move on to more practical things. When people talk about cooling servers, they first of all mean temperature control. That is true, but not enough. The three pillars of proper cooling are temperature, air volume and humidity. The second tier is airflow management: how to deliver cold air to where the server will take it in, how to collect hot air from the server's exhaust and send it back to the air conditioner, and how to do all this so that hot and cold air do not mix.
Temperature is the simple part. There are recommendations from the server manufacturer and recommendations from ASHRAE. The normal temperature for most server rooms is, in my opinion, 22-24 °C.
While everyone remembers temperature, practically nobody who builds a server room thinks about air volume. Look at the technical specifications of a server. In addition to power consumption, dimensions and so on, there is a parameter, usually measured in CFM (cubic feet per minute): the volume of air pumped through the chassis. That is, your server needs air of a certain temperature and, in bold capitals, IN A CERTAIN VOLUME. This brings us straight to the question of using household split systems in a server room. The trouble is that they cannot handle the required volume. The specific heat output of a person is negligible compared to a server, and household air conditioners are designed precisely to create a comfortable climate for humans. Their small fans (like the forelimbs of a tyrannosaur) are simply unable to move the volume of air needed to cool the servers. As a result, we get a picture where the server pushes the air through, the air conditioner cannot take it away, and the hot air mixes with the cold. You have surely been in a server room where the air conditioner blows out +16 °C while the room sits at +28 °C. I have. Maybe your server room is exactly like that?
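To make that volume concrete, here is a rough back-of-the-envelope estimate in Python (a sketch only; the 500 W server and the 12 °C front-to-back temperature rise are made-up example figures, take the real ones from your server's datasheet):

# Rough estimate of the airflow a server needs to carry away its heat.
# Physics: P = rho * cp * V * dT  =>  V = P / (rho * cp * dT)
# rho ~ 1.2 kg/m^3 and cp ~ 1005 J/(kg*K) for air at room conditions.

RHO_AIR = 1.2          # kg/m^3, air density
CP_AIR = 1005.0        # J/(kg*K), specific heat of air
M3S_TO_CFM = 2118.88   # 1 m^3/s expressed in cubic feet per minute

def required_airflow_cfm(power_watts: float, delta_t_celsius: float) -> float:
    """Airflow needed to remove power_watts of heat with a
    delta_t_celsius rise between intake and exhaust."""
    volume_m3s = power_watts / (RHO_AIR * CP_AIR * delta_t_celsius)
    return volume_m3s * M3S_TO_CFM

if __name__ == "__main__":
    # Example (made-up numbers): one 500 W server with a 12 degree
    # front-to-back rise needs roughly 70 CFM.
    one_server = required_airflow_cfm(500, 12)
    print(f"One 500 W server, dT = 12 C: {one_server:.0f} CFM")
    print(f"Cabinet of 10 such servers:  {10 * one_server:.0f} CFM")

For comparison, the indoor unit of a typical household split moves air on the order of a few hundred CFM, so even a single loaded cabinet can outrun it.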
Well, so as not to get up twice:
1. Household splits are designed for 8/5 operation, while a server room runs 24/7. A split will exhaust its service life in a year and a half.
2. Splits do not know how to deliver air of the required temperature to the server intake; they only know how to blow out air of a set temperature, and they could not care less where it ends up (the bastards).
3. Their air intake and discharge are located too close together, which means hot and cold air will inevitably mix (and then see point 2).
4. It is very difficult to make splits work according to the readings of temperature sensors (and here again see point 2).
In general, do not use household splits. Just don't. In the long run, a good precision air conditioner will turn out cheaper than a split.
Now about humidity control. The article mentioned at the beginning contains one wrong message. Humidity does need to be controlled, that much is certain. But you need not to dry the air, you need to humidify it. The point is that a server room has a closed air circulation (at least it should). The amount of moisture in the air at stage zero (server room start-up) is within certain limits. During cooling, most of that moisture condenses on the air conditioner's heat exchanger (the temperature difference is too large) and goes down the drain. The air becomes too dry, and that means static electricity on the boards and a lower heat capacity of the air. So buying a productive humidifier, plus a water treatment system for it, is money well spent.
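For the curious, here is a small illustration of why cooling dries the air, using the Magnus approximation for saturation vapor pressure (the 24 °C / 50 % room air and the 10 °C coil temperature are illustrative assumptions, not measurements from any real room):

import math

def saturation_vapor_pressure_hpa(t_celsius: float) -> float:
    """Magnus approximation for saturation vapor pressure over water, hPa."""
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

def absolute_humidity_g_m3(t_celsius: float, rh_percent: float) -> float:
    """Water vapor content of air, grams per cubic meter."""
    e_hpa = (rh_percent / 100.0) * saturation_vapor_pressure_hpa(t_celsius)
    # e in hPa -> Pa, then AH = e / (Rv * T) with Rv = 461.5 J/(kg*K)
    return 100.0 * e_hpa / (461.5 * (t_celsius + 273.15)) * 1000.0

# Room air at 24 C and 50 % RH carries about 10.9 g of water per m^3.
# Air chilled to 10 C at the coil can hold at most about 9.4 g/m^3,
# so the excess condenses on the heat exchanger and goes down the drain.
room = absolute_humidity_g_m3(24, 50)
coil_max = absolute_humidity_g_m3(10, 100)
print(f"Air at 24 C / 50 % RH: {room:.1f} g/m^3")
print(f"Maximum at 10 C coil:  {coil_max:.1f} g/m^3")
print(f"Condenses per m^3:     {max(room - coil_max, 0):.1f} g")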
Now a point about airflow management. In most cases the fan units in cabinets are absolutely useless: they pull air from bottom to top, while the server pulls it from front to back. What you should do instead is strike the fan units from the budget and put blanking panels over the empty units in the cabinet. Even covering them with cardboard will do, as long as you close every opening through which air from the back of the cabinet can get to the front. Passive airflow management in most cases works better than active, and it is cheaper.
Microclimate monitoring is a very important point. Without monitoring you will never know that something has started working not the way it should. You need to monitor both temperature and humidity. Humidity can be monitored at the point farthest from the humidifier, since this indicator is practically the same at any point in the room. Temperature, however, must be monitored at the front door of each cabinet. If you do not distribute cold air from under a raised floor, one sensor per cabinet is enough. If you do distribute air through a raised floor (by now we are obviously using proper air conditioners), the correct strategy is to monitor the air at different heights above the floor (for example, 0.5 m and 1.5 m). It would not be superfluous to mention that in a server room you must never, under any circumstances, install cabinets with glass or solid doors. Air must pass freely through the cabinet and the servers. If you do happen to have such cabinets, remove the doors.
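As a rough sketch of what such monitoring might look like in code (the sensor names and the read_sensor function are hypothetical placeholders for your real SNMP/Modbus/vendor calls, and the alert thresholds are only a starting point to adjust against your equipment's specs):

import random
import time

# Hypothetical sensor layout: one temperature sensor on each cabinet's
# front door, plus a single humidity sensor anywhere in the room
# (humidity is practically uniform across the room).
TEMP_SENSORS = ["cabinet-01-front", "cabinet-02-front"]
HUMIDITY_SENSOR = "room-far-corner"

TEMP_RANGE_C = (18.0, 27.0)   # intake temperature band to alert on
RH_RANGE_PCT = (30.0, 60.0)   # relative humidity band to alert on

def read_sensor(sensor_id: str) -> float:
    """Placeholder reading; replace with your real SNMP/Modbus/vendor call."""
    if sensor_id == HUMIDITY_SENSOR:
        return random.uniform(20.0, 70.0)   # simulated % RH
    return random.uniform(16.0, 30.0)       # simulated intake temperature, C

def check_once() -> list[str]:
    """Return human-readable alerts for any out-of-range readings."""
    alerts = []
    for sensor_id in TEMP_SENSORS:
        t = read_sensor(sensor_id)
        if not TEMP_RANGE_C[0] <= t <= TEMP_RANGE_C[1]:
            alerts.append(f"{sensor_id}: intake temperature {t:.1f} C out of range")
    rh = read_sensor(HUMIDITY_SENSOR)
    if not RH_RANGE_PCT[0] <= rh <= RH_RANGE_PCT[1]:
        alerts.append(f"{HUMIDITY_SENSOR}: relative humidity {rh:.0f} % out of range")
    return alerts

if __name__ == "__main__":
    while True:
        for alert in check_once():
            print("ALERT:", alert)   # send to mail/messenger/NMS instead of printing
        time.sleep(60)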
As a summary:
1. Do not use household splits - they do everything wrong.
2. Manage humidity.
3. And the air flow.
4. Install blanking panels over unused cabinet units.
5. Use cabinets with perforated front and rear doors. If you do not have such, remove the doors altogether. Or grab a drill.
6. Position the monitoring system's sensors correctly: measure temperature at the front of the cabinets, humidity anywhere in the room.
7. Remove heating radiators from the server room. They not only heat, they sometimes also leak water.
8. Seal up the windows. Windows mean heat leakage and the easiest way into the room, bypassing the armored server room door and five security posts.
9. Provide proper waterproofing, vapor barrier and thermal insulation for the room.
10. Tools are secondary. There is a huge number of cooling and monitoring solutions. The main thing is to understand what is primary for you today; a tool will be found.
11. Accept the fact that today IT is not only "patching KDE under FreeBSD", VMs and databases, but also such seemingly distant things as power supply, refrigeration, physical security and architecture.
Good luck building the right infrastructure.