
What we know about high density servers


Comments on some Habr posts made me wonder whether people really understand high-density servers and their capabilities. The purpose of this post is to bring some clarity to the issue. It is also planned to be the first in a series of articles on HPC (high-performance computing).

High-density servers are most in demand for building cluster-type supercomputers, virtualization and cloud systems, parallel-access storage systems, analytical computing systems, search engines, and so on. Their use is primarily driven by the inability to meet all the requirements with other technologies. Let's consider the available solutions and their pros and cons.

Blade Server


In the West, space in data centers has long been scarce, so it is not surprising that high-density servers first appeared there. The pioneer was RLX Technologies, which in 2000 developed a system that fit 24 blades into 3U. The main customers of these first blade servers were the military and NASA. The startup was later acquired by HP. But the most important thing had been done: a high-density server had been created.

The giants followed the pioneer: Intel, IBM and HP, and then DELL, SUN, Supermicro, Fujitsu, Cisco, HDS and others.

The main difference between blade systems and RACK servers, besides high density, is the integration of the servers with the surrounding infrastructure: networking, monitoring, management, cooling and power. All of this sits in one enclosure and, where possible, has fault-tolerance features. The unifying element is the backplane, a motherboard that is usually passive and to which all elements of the blade system connect. The space occupied in the rack varies from 3U to 10U. The highest-density solutions are HP Blade and DELL PowerEdge, at 3.2 servers per 1U. Almost all manufacturers build servers only on x86/x64 processors, but there are also solutions on RISC, MIPS and ARM processors.

Incidentally, the server density in the RLX Technologies solution was even higher. This was possible because it used single-core Celeron processors, which today are used mainly in desktop thin clients. The heat dissipation of modern processors is much higher, which is why such density cannot yet be achieved in modern solutions.

What are the advantages of blade servers? Let's highlight the main points:
  1. Everything is located in one chassis.
  2. The monitoring and management system has advanced functionality compared to RACK servers.
  3. Several types of networks are available in each server blade. These can be: Ethernet (100 Mb/s, 1 Gb/s, 10 Gb/s), Fibre Channel (2 Gb/s, 4 Gb/s, 8 Gb/s, 16 Gb/s), InfiniBand (SDR, DDR, QDR, FDR).
  4. Built-in cooling and power-supply elements have fault-tolerance features.
  5. Hot swap of all replaceable components.
  6. The ability to organize built-in disk storage shared by all installed blade servers.
  7. High placement density in the rack.

What are the weaknesses? The main disadvantages that seem significant to me:
  1. The high price of a partially populated chassis. Only at about 70% fill does the price approach that of comparable RACK servers.
  2. Limited options for expanding a server blade's configuration.
  3. A server blade cannot operate as a standalone unit outside the chassis.
  4. Limited simultaneous use of network interfaces.
  5. The impossibility, in some cases, of organizing a non-blocking network between the server blades and the outside world.
  6. Component restrictions due to thermal design power (for example, the highest-end processors cannot be installed because of overheating).
  7. Proprietary technologies: once you buy equipment from one manufacturer, you will keep buying only from them.
  8. Increased requirements for engineering infrastructure (power supply and cooling).
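The break-even point from the first disadvantage can be illustrated with a small calculation. All prices below are hypothetical placeholders chosen only to make the arithmetic visible; real chassis, blade and RACK prices vary by vendor and configuration.

```python
# Hypothetical illustration of the fill-level break-even for a blade chassis.
# All prices are made-up assumptions, not vendor list prices.

def cost_per_server_blade(n_blades: int, chassis_price: float, blade_price: float) -> float:
    """Total cost per server when the chassis holds n_blades."""
    return (chassis_price + n_blades * blade_price) / n_blades

CHASSIS_PRICE = 16_500  # assumed price of an empty blade chassis
BLADE_PRICE = 5_000     # assumed price of one server blade
RACK_PRICE = 6_500      # assumed price of a comparable standalone RACK server
MAX_BLADES = 16         # assumed chassis capacity

for fill_pct in (25, 50, 70, 100):
    n = max(1, MAX_BLADES * fill_pct // 100)
    per_server = cost_per_server_blade(n, CHASSIS_PRICE, BLADE_PRICE)
    note = "(at or below RACK price)" if per_server <= RACK_PRICE else ""
    print(f"{fill_pct:3d}% fill: {per_server:7.0f} per server {note}")
```

With these assumed numbers, the per-server cost of the blade solution reaches the standalone RACK server only around 70% fill, which matches the rule of thumb above: the fixed cost of the chassis has to be amortized over enough blades.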

Let's consider the structure of a blade system using a solution from Dell as an example: the Dell PowerEdge M1000e.

Dell PowerEdge M1000e

Server blades can have from two to four processors. Depending on the number and type of processors, one chassis can hold from 8 to 32 blade servers. Each blade server can have 1GbE, 10GbE, 8 Gb/s FC, and InfiniBand SDR, DDR, QDR or FDR interfaces. The base ports are 1GbE.

Depending on the size of the blades, one or two mezzanine interface modules can be installed. Each mezzanine module can have four 1GbE ports or two ports of any other interface.

For a fault-tolerant scheme, switches are installed in the chassis in pairs, up to three pairs in total. Each pair must consist of identical switches, so various combinations are possible.

Also, for fault tolerance, two remote monitoring and management modules are installed. These modules allow you to control any blade remotely: power-on, BIOS setup, boot-source selection, OS installation from either internal media or the local administrator's media, and full remote KVM access.

One of the boot options is booting from an SD card. Each blade has two such cards and can boot from either; they can also be combined into a mirror.

The only module without redundancy is the KVM module, but its failure does not break network connectivity or management.

When using M420 blades, the server density reaches 3.2 servers per 1U.

M420

TWIN server


A density alternative to the existing blade systems is their younger sibling, TWIN. This technology was developed by Intel and transferred to Supermicro in 2006 for market promotion. The first TWIN servers appeared in 2007: a 1U enclosure holding two servers with a single power supply, with all connectors brought out to the rear.


This layout has gained recognition over the past six years, and the lineup has expanded greatly. Now 1U, 2U and 4U TWIN servers are available, holding from 2 to 8 two-socket servers. Some manufacturers offer variants that place one four-socket server instead of two two-socket ones. The main pros and cons are listed below.

Pluses of TWIN servers:
  1. Everything is located in one chassis.
  2. Several types of networks are available in each server. These can be: Ethernet (100 Mb/s, 1 Gb/s, 10 Gb/s), InfiniBand (SDR, DDR, QDR, FDR).
  3. Built-in cooling and power-supply elements in a number of models have fault-tolerance features.
  4. In a number of TWIN servers, hot swap of all replaceable components.
  5. Use of standard PCIe expansion cards.
  6. The ability to organize an integrated disk storage system.
  7. High placement density in the rack.
  8. Lower price than Blade and RACK servers.

Minuses:
  1. External network switches are required.
  2. A server module cannot operate as a standalone unit outside the chassis.
  3. In some cases, component restrictions due to thermal design power (for example, the highest-end processors cannot be installed because of overheating).
  4. When a rack is fully populated with TWIN servers, increased requirements for the engineering infrastructure (power supply and cooling).
  5. Server density is lower than that of blades.

As the pros and cons show, TWIN servers and blade servers are not so much competitors as an organic complement to each other.

One of the most prominent representatives of TWIN servers is the Dell C6000 series. It is a 2U chassis with two power supplies and room for two, three or four server modules. Each server can take two or three PCIe expansion cards.

TWIN server: Dell C6000

Microserver


Our story would not be complete without the latest trend in server form factors for data centers: microservers. These are single-socket servers that minimize size and power consumption; you should not count on serious performance from them. One representative of this server type is a Supermicro server, shown in the figure.



As can be seen from the figure, the density of this solution reaches 4 servers per 1U. The emergence of this server class is driven by the low server requirements of most client applications. Microservers can be used as an alternative to virtualization, for cases where an application should not be virtualized for one reason or another. Small microservers are also suitable for typical lightly loaded office tasks.
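The density figures quoted throughout this article can be gathered in one place. A minimal sketch, using only the chassis heights and server counts mentioned above (a classic 1U RACK server is included as a baseline; the 10U height for the M1000e is inferred from the 3.2 servers per 1U figure with 32 blades):

```python
# Density of the form factors discussed in this article,
# expressed as servers per 1U of rack space.

def servers_per_u(chassis_height_u: int, servers_per_chassis: int) -> float:
    """Servers per 1U for a chassis of a given height and capacity."""
    return servers_per_chassis / chassis_height_u

form_factors = {
    "Blade (Dell M1000e with M420 blades)": (10, 32),  # 10U chassis, 32 blades
    "TWIN (Dell C6000, four modules)": (2, 4),         # 2U chassis, 4 server modules
    "Microserver (Supermicro)": (1, 4),                # 4 servers per 1U
    "Classic RACK server": (1, 1),                     # baseline: 1 server per 1U
}

for name, (height, count) in form_factors.items():
    print(f"{name}: {servers_per_u(height, count):.1f} servers per 1U")
```

Note that the microserver chassis wins on raw density, while the blade figure reproduces the 3.2 servers per 1U quoted for the M420 above.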

Conclusion


I have tried not to go into the details of each individual manufacturer; those details can be explored directly on the manufacturers' websites.

It is difficult to draw an unambiguous conclusion as to which of the solutions described above best suits a particular customer. For example, the density of blade servers is much higher than that of TWIN servers, which lets you place more servers in minimal space. On the other hand, the latency of 10GbE modules for blade chassis can be higher than that of PCIe 10GbE cards in TWIN servers. Likewise, peak per-processor performance in high-density blade servers is lower than in TWIN solutions. Arguments about the advantages of one high-density solution over another only make sense in the context of a specific task. We are ready to share our opinions with interested readers, taking their specific tasks into account.

Source: https://habr.com/ru/post/175155/

