Companies often face the need to install new, more powerful equipment in existing facilities. Solving this problem is not always easy, but there are a number of standard approaches that help. Today we will walk through them using the example of the MediaTek data center.
MediaTek, a world-renowned microelectronics manufacturer, decided to build a new data center at its headquarters. As usual, the project had to be delivered as quickly as possible while ensuring compatibility of the new solution with all existing equipment. In addition, the power supply and cooling facilities had to be adapted from the start to the conditions of the building in which the new data center was to operate.
The company’s CIO requested automation and monitoring technologies for the data center, and the customer welcomed the introduction of energy-efficient cooling and power solutions: an additional budget was allocated for these technologies, which made it possible to create a truly high-performance data center under the given constraints.
Before starting the project, it was necessary to study the equipment to be housed, and it was genuinely powerful. The new data center was planned for 80 racks, some of them designed for loads of up to 25 kW each.
Load placement modeling and an analysis of possible cooling schemes were carried out, after which it was decided to divide the data center into functional zones. A high-load zone, where the most powerful equipment is located, was set apart; for its cooling and power supply it was decided to install the most powerful and technologically advanced systems, including RowCool in-row air conditioners.
The medium-density zone, which mainly housed network switching equipment, storage systems and auxiliary servers, was also set apart. Given the lower heat output of these racks, a longer hot aisle could be built, which saved usable floor space.
We simulated the airflow and estimated the permissible temperature ranges for both zones, calculated the equipment power and the acceptable aisle dimensions, as well as how the equipment should be laid out within the racks.
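To give a sense of the cooling problem at rack level, here is a minimal back-of-the-envelope sketch (not from the original project) that estimates the airflow a single 25 kW rack requires, using the standard sensible-heat relation Q = P / (ρ · cp · ΔT); the air density, specific heat and the assumed 12 K temperature rise across the rack are illustrative assumptions.

```python
# Rough airflow estimate for a high-density rack (illustrative, not project data).
# Sensible heat: P = rho * cp * Q * dT  =>  Q = P / (rho * cp * dT)

RHO_AIR = 1.2      # kg/m^3, air density at ~20 degrees C (assumption)
CP_AIR = 1005.0    # J/(kg*K), specific heat of air (assumption)

def required_airflow_m3h(power_kw: float, delta_t_k: float) -> float:
    """Volumetric airflow (m^3/h) needed to remove power_kw at a delta_t_k air temperature rise."""
    q_m3s = power_kw * 1000.0 / (RHO_AIR * CP_AIR * delta_t_k)
    return q_m3s * 3600.0

if __name__ == "__main__":
    # A 25 kW rack with an assumed 12 K temperature rise across it.
    print(f"~{required_airflow_m3h(25, 12):.0f} m^3/h per 25 kW rack")
    # -> roughly 6,200 m^3/h, which illustrates why in-row cooling was chosen for this zone.
```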
The airflow simulation helped find the optimal positions for the RowCool in-row air conditioners, so that the combination of active cooling and hot/cold aisle containment delivers the maximum effect.
Modular load-distribution systems were designed and installed for both zones. As a result, the high-load zone ended up with shorter aisles and more RowCool air conditioners than the medium-load zone.
The in-row air conditioners are connected to chillers via a chilled-water loop. To keep such a system safe, dozens of sensors were installed in the data center, and zones for detecting possible coolant leaks were mapped out. If even a single drop of water appears, the system immediately raises a notification and helps correct the situation.
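The article does not describe the leak-detection integration in detail; the sketch below is a hypothetical illustration of how zone-based leak sensors might be polled and turned into notifications. The zone names and the read/alert helpers are invented for illustration and are not part of any real product API.

```python
# Hypothetical leak-monitoring loop (illustrative only; zone names and helpers
# are not from the original project).
import time

LEAK_ZONES = ["chiller-room", "row-A-supply", "row-B-supply", "high-load-zone"]

def read_leak_sensor(zone: str) -> bool:
    """Return True if the leak-detection cable in `zone` reports moisture (stub)."""
    return False  # replace with the real sensor/BMS integration

def send_alert(zone: str) -> None:
    """Notify the operators (e-mail, SNMP trap, DCIM event, etc.) -- stub."""
    print(f"ALERT: possible coolant leak detected in {zone}")

def monitor(poll_interval_s: float = 5.0) -> None:
    """Poll every leak zone and raise an alert as soon as moisture is detected."""
    while True:
        for zone in LEAK_ZONES:
            if read_leak_sensor(zone):
                send_alert(zone)
        time.sleep(poll_interval_s)
```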
Moreover, the RowCool air conditioners in the high-load zone are connected in groups and configured to coordinate autonomously. If one air conditioner fails, the others ramp up and, taking the cold-aisle containment into account, provide sufficient cooling while the failed unit is repaired or replaced. For this reason the in-row air conditioners are also installed in an N+1 configuration.
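As a rough illustration of how such an N+1 cooling group behaves (the heat load, unit capacity and group size below are assumptions, not project figures), this sketch checks whether the surviving units can still cover the zone's heat load when one unit drops out, and how much each remaining unit has to take on.

```python
# N+1 cooling-group check (illustrative numbers, not from the original project).

def group_can_cover(heat_load_kw: float, unit_capacity_kw: float,
                    units_installed: int, units_failed: int = 1) -> bool:
    """True if the remaining units can still absorb the zone's heat load."""
    remaining = units_installed - units_failed
    return remaining * unit_capacity_kw >= heat_load_kw

def per_unit_output(heat_load_kw: float, units_running: int) -> float:
    """Even share of the heat load per running unit."""
    return heat_load_kw / units_running

if __name__ == "__main__":
    # Assume a 200 kW high-load pod served by five 50 kW in-row units (N = 4, plus 1 spare).
    load, cap, installed = 200.0, 50.0, 5
    print(group_can_cover(load, cap, installed))   # True: 4 x 50 kW still covers 200 kW
    print(per_unit_output(load, installed - 1))    # each surviving unit ramps to 50 kW
```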
Following established practice, we placed the backup batteries and UPS systems in a separate area, so that air flows do not mix and the cooling systems do not waste capacity on loads that do not particularly need extra cooling.
Since the total capacity of the data center exceeds 1,500 kW, the power infrastructure and the UPS zone had to be designed with care. Modular UPSs were installed with N+1 redundancy, and each rack received ring power, that is, at least two power feeds. The monitoring system tracks power consumption, voltage and current so that any abnormal change is noticed instantly.
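A minimal sketch of the N+1 sizing arithmetic for a modular UPS (the 250 kW module rating is an assumption for illustration; the article only states the >1,500 kW total): take the number of modules that just covers the load and add one redundant module.

```python
# N+1 module count for a modular UPS (module rating is an illustrative assumption).
import math

def ups_modules_n_plus_1(load_kw: float, module_kw: float) -> int:
    """Modules needed to carry load_kw, with one redundant module on top."""
    n = math.ceil(load_kw / module_kw)
    return n + 1

if __name__ == "__main__":
    # 1,500 kW of load served by hypothetical 250 kW modules: N = 6, so 7 are installed.
    print(ups_modules_n_plus_1(1500, 250))  # -> 7
```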
In the high-load zone, power distribution units (PDUs) were installed on the rear of the Delta racks, and additional 60 A distribution modules were placed on top.
In the medium-load zone, it was possible to do without distribution cabinets mounted above the racks. This approach saved money without sacrificing quality.
The new data center also received equipment management systems. Through the InfraSuite DCIM system, operators can track all of the equipment and its location in the data center, as well as all the power parameters of each individual rack.
An EnviroProbe sensor was also installed in each rack; its data is collected by EnviroStation hubs in each row and forwarded to a central management server. Thanks to this, data center operators can continuously monitor air temperature and humidity in every rack.
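The article does not show this data path in code, so here is a hypothetical sketch of the rack-sensor to row-hub to central-server aggregation described above; the record structure and function names are invented for illustration and are not the InfraSuite or EnviroProbe API.

```python
# Hypothetical per-rack telemetry aggregation (not the actual InfraSuite/EnviroProbe API).
from dataclasses import dataclass
from statistics import mean
from typing import Dict, List

@dataclass
class RackReading:
    rack_id: str
    temperature_c: float
    humidity_pct: float

def collect_row(readings: List[RackReading]) -> Dict[str, float]:
    """What a row-level hub might forward to the central server: per-row aggregates."""
    return {
        "avg_temp_c": mean(r.temperature_c for r in readings),
        "max_temp_c": max(r.temperature_c for r in readings),
        "avg_humidity_pct": mean(r.humidity_pct for r in readings),
    }

if __name__ == "__main__":
    row_a = [RackReading("A01", 24.5, 45.0), RackReading("A02", 27.1, 43.5)]
    print(collect_row(row_a))  # the central server would store and alert on these aggregates
```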
In addition to power control, InfraSuite also helps plan how the data center is filled, since the system holds data on the quantity and power of the installed equipment. Engineers can plan the installation of new servers or switching systems while redistributing power through the smart PDU cabinets.
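As a simplified illustration of that kind of capacity planning (the rack budget and device power draws below are made-up figures, and this is not the InfraSuite product logic), the sketch checks whether a new device fits a rack's remaining power budget before it is installed.

```python
# Simplified capacity-planning check (illustrative; not the InfraSuite product logic).

def remaining_budget_kw(rack_limit_kw: float, installed_kw: float) -> float:
    """Power headroom left in the rack."""
    return rack_limit_kw - installed_kw

def fits(rack_limit_kw: float, installed_kw: float, new_device_kw: float) -> bool:
    """Can the new device be placed without exceeding the rack's power budget?"""
    return new_device_kw <= remaining_budget_kw(rack_limit_kw, installed_kw)

if __name__ == "__main__":
    # A 25 kW high-density rack already carrying 21.5 kW: a 2.4 kW server fits, a 4 kW one does not.
    print(fits(rack_limit_kw=25.0, installed_kw=21.5, new_device_kw=2.4))  # True
    print(fits(rack_limit_kw=25.0, installed_kw=21.5, new_device_kw=4.0))  # False
```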
Building the data center for MediaTek was interesting in that a large amount of high-performance load had to be placed on a fairly small floor area. Instead of spreading it across the whole room, it proved more efficient to isolate the high-power servers in a separate zone and equip that zone with more powerful, more advanced cooling.
A comprehensive monitoring and control system keeps the energy consumption of the high-power servers under constant watch, while redundant cooling and power components help prevent downtime even when equipment fails. These are exactly the data centers that should be built for the critical business processes of modern companies.
Source: https://habr.com/ru/post/461361/