Dell Project Triton Servers and Their Cooling System

Data center engineers know well that liquid removes heat from equipment far more efficiently than air. The approach is well established in supercomputing, where it is used for chip cooling among other things, and as component density in conventional server platforms keeps growing, it has begun to appear in ordinary data centers as well. One adopter is the e-commerce giant eBay.



The development was carried out by the Extreme Scale Infrastructure (ESI) division of the US corporation Dell, which builds custom equipment for the largest data centers in the world. One of its projects was a liquid cooling system designed specifically for eBay's server equipment.

Water is the working fluid of the system. The Dell design has a distinctive advantage: it moves water directly from the data center's cooling towers to heat exchangers on each chip inside the server chassis. There are no central coolant distribution units, which normally sit between the cooling tower and the server racks in liquid-cooled supercomputers and data centers. Another feature of the project, codenamed “Project Triton”, is that it uses warmer water than a conventional liquid cooling system.



Dell's system is already in use at one eBay data center, where the inlet water temperature reaches 33 degrees Celsius. That is not the limit: at the moment the heat exchangers cool hot processors that run at very high clock frequencies and give off correspondingly more heat. Paired with low-power processors, Project Triton could accept supply water as warm as 60 degrees Celsius.

eBay's processors need this more efficient cooling because they are modified versions of chips from the Intel Xeon E5 v4 family. In its base version, each chip has 22 physical cores and can execute up to 44 threads simultaneously. Cache memory amounts to 2.5 MB per core, or up to 55 MB in total. Depending on the core count, TDP ranges from 55 to 145 watts. The Broadwell-EP platform also provides up to 40 PCI Express lanes and a quad-channel DDR4 controller, and the processors fit Socket R3. eBay and Intel engineers modified the chips so that they can run at a higher clock frequency; Intel has been making similarly customized chips for other hyperscale data center operators, such as Facebook and Amazon, for several years.
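As a rough illustration of what these TDP figures imply for the water loop, the required coolant flow per chip follows from the heat balance Q = ṁ·c_p·ΔT. The sketch below is a back-of-the-envelope estimate only: it assumes the chip's full TDP goes into the water and that a 5 K temperature rise across the cold plate is acceptable, neither of which is stated in the article.

```python
# Coolant flow needed per chip, from m_dot = Q / (c_p * dT).
# Assumed, not from the article: all TDP enters the water; 5 K rise allowed.

SPECIFIC_HEAT_WATER = 4186.0  # J/(kg*K)
WATER_DENSITY = 1000.0        # kg/m^3, approximate

def flow_per_chip(heat_watts: float, delta_t_kelvin: float) -> float:
    """Required mass flow in kg/s to carry away heat_watts at the given rise."""
    return heat_watts / (SPECIFIC_HEAT_WATER * delta_t_kelvin)

for tdp in (55.0, 145.0):  # TDP range quoted for the Xeon E5 v4 family
    m_dot = flow_per_chip(tdp, delta_t_kelvin=5.0)
    liters_per_minute = m_dot / WATER_DENSITY * 1000.0 * 60.0
    print(f"TDP {tdp:5.0f} W -> {m_dot:.4f} kg/s (~{liters_per_minute:.2f} L/min)")
```

Even at the top of the TDP range this works out to well under half a liter per minute per chip, which helps explain why a tower-to-chip loop without intermediate distribution units is feasible.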



The Project Triton system is more than a single-loop liquid cooler. It is a rack-scale solution: all components in the rack, including the heat exchangers, PDUs, and servers, are designed together as a single integrated system. Dell engineers take this approach when developing specialized solutions for hyperscale data centers.

Data center equipment vendors have traditionally focused on individual, self-contained products, whether servers, storage systems, or network switches. Designing at rack scale makes better use of scarce power and cooling capacity, as well as in-rack space and network connections: those resources can be divided between compute nodes more rationally, raising the overall efficiency of the data center.

Project Triton uses a 21-inch server rack, very similar to the racks Facebook engineers designed under the Open Compute Project initiative. Water circulates through copper pipes connected to the heat exchangers above the processors inside each server case. Because there are no additional coolant distribution units or pumps, the system's own energy consumption is very low: according to Dell, the power usage effectiveness (PUE) of a data center running Project Triton is just 1.03.
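For reference, power usage effectiveness is total facility power divided by IT equipment power, so a PUE of 1.03 means only about 3 percent overhead on top of the IT load. The rack figures below are hypothetical and chosen only to make the arithmetic land on Dell's number.

```python
# PUE = total facility power / IT equipment power.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    return total_facility_kw / it_equipment_kw

it_load_kw = 30.0    # hypothetical IT load of one rack
overhead_kw = 0.9    # hypothetical cooling and distribution overhead (3%)
print(f"{pue(it_load_kw + overhead_kw, it_load_kw):.2f}")  # -> 1.03
```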



According to Dell, Project Triton uses 97 percent less power to cool servers than the average air-cooled data center. It also uses 62 percent less electricity than the Apollo 8000, the water-cooled high-performance computing solution from competitor HPE (Hewlett Packard Enterprise) covered in an earlier article.
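Those percentages can be loosely sanity-checked against the PUE figure. Assuming, purely for illustration, that an average air-cooled data center runs near PUE 1.6 (the article gives no baseline), overhead per watt of IT load drops from about 0.6 W to 0.03 W:

```python
# Rough cross-check of the savings claim via PUE overheads.
# Assumed baseline, not from the article: average air-cooled PUE of 1.6.

air_cooled_overhead = 1.6 - 1.0   # W of overhead per W of IT load (assumed)
triton_overhead = 1.03 - 1.0      # from Dell's quoted PUE of 1.03

savings = 1.0 - triton_overhead / air_cooled_overhead
print(f"~{savings:.0%} less overhead power")  # -> ~95% less overhead power
```

PUE overhead includes more than cooling alone, so this is only an order-of-magnitude check, but it is consistent with Dell's 97 percent claim.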

To protect data center operators from water leaking onto other electronics, Dell engineers stress-tested the welded pipe joints at high pressure using simulation, raising the pressure in the pipe to 350 PSI, five times the roughly 70 PSI at which the water flows during normal operation. In addition, every element in the server rack, and in each server, is fitted with leak detection mechanisms and emergency shutdown devices. Dell engineers also drew on military technology, using sealed limiters to keep liquid away from the servers' electrical components.
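The article does not describe how the detection and shutdown logic works, but the general shape of such a mechanism is simple: a leak shows up either as moisture at a sensor or as a pressure drop below the normal operating point. The sketch below is entirely hypothetical; the sensor names, threshold, and interfaces are illustrative, not Dell's.

```python
# Hypothetical leak-detection and emergency-shutoff logic for a rack loop.
# Only the two pressure figures (70 and 350 PSI) come from the article.

from dataclasses import dataclass

NORMAL_PRESSURE_PSI = 70.0    # normal operating pressure per the article
TESTED_PRESSURE_PSI = 350.0   # joint test pressure per the article

@dataclass
class LoopSensor:
    location: str
    pressure_psi: float
    moisture_detected: bool

def should_shut_off(sensor: LoopSensor) -> bool:
    """Trip on detected moisture or on pressure well below normal,
    since a leak presents as pressure loss in the loop."""
    return sensor.moisture_detected or sensor.pressure_psi < 0.8 * NORMAL_PRESSURE_PSI

readings = [
    LoopSensor("rack manifold", 70.2, False),
    LoopSensor("server cold plate", 52.1, True),  # simulated leak event
]
for reading in readings:
    if should_shut_off(reading):
        print(f"Emergency shutoff triggered at {reading.location}")

print(f"Test margin: {TESTED_PRESSURE_PSI / NORMAL_PRESSURE_PSI:.0f}x normal pressure")
```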

Source: https://habr.com/ru/post/395261/

