
Admin's notes: the splendor and misery of blade systems



In these notes we usually skip enterprise technology because of its limited applicability in smaller projects. But today's article is an exception, because it is about modular systems - "blades".


There are few architectural delights in the IT world surrounded by such an aura of "incredible coolness" and a comparable set of myths. So I will not complicate things further, and will simply talk about the features of such systems and how applicable they are in practice.


Lego for engineers


A blade server is almost an ordinary server: it has a familiar motherboard, RAM, processors, and a host of auxiliary systems and adapters. The "almost" is that such a server is not designed to work on its own and comes in a special compact form factor for installation into a dedicated chassis.


The chassis - also called an "enclosure" - is essentially a big box with slots for servers and additional modules. All servers and components are connected through a large switching board (the backplane) and together form a blade system.


If you disassemble the whole system into its components, the following set of parts ends up on the table: the chassis itself with its backplane, the blade servers, the interconnect modules, the management modules, and the power supplies with fans.



What distinguishes all this from an ordinary server cabinet is its compact size (usually 6-10U) and high level of reliability, since every component can be made redundant. Here, by the way, lies one of the myths: a dozen blades do not merge into one big server. It is simply a dozen servers sharing a common infrastructure.


By the way, HPE also has solutions that outwardly resemble blade servers - HPE Superdome. The blades there are processor modules with RAM, and in such solutions the entire system really is one high-performance server.

The architectural nuances of blade systems from different manufacturers have already been discussed on Habr (the article is old, but still sound in its fundamentals), so I will use the HPE BladeSystem c7000 for illustration.


The role of a blade can be played by:

  - server blades of various sizes (half-height and full-height);
  - storage blades - disk shelves installed in the same chassis;
  - tape blades for backups.



The picture below shows a complete HPE BladeSystem c7000. The layout of the components is clear enough as it is, so pay attention only to the Interconnect modules section: each row holds a fault-tolerant pair of network devices or pass-thru modules that simply bring the servers' network interfaces outside.


[Image: HPE BladeSystem c7000 component layout]


The compact HPE ProLiant BL460c Gen8 blade fits only two 2.5-inch disks. For extra elegance, instead of local disks you can boot over the network from a SAN storage system or via PXE.


[Image: HPE ProLiant BL460c Gen8 blade server]


Below is a more compact blade system from IBM. The general principles are the same, although the nodes are laid out differently:


[Image: IBM blade system]


The most interesting part of blade systems is, in my opinion, the networking. With fashionable converged switches, you can work wonders with the internal network of a blade system.


Some network and enterprise magic


The interconnect modules can be dedicated Ethernet switches, SAS switches, or converged modules that can do both. Of course, you cannot install an ordinary switch in a blade system, but compatible models are made by familiar brands - the "big three" of HPE, Cisco, and Brocade. In the simplest case these are just pass-through modules that bring all 16 blades outside through 16 Ethernet ports - the HPE Pass-Thru.


[Image: HPE Pass-Thru module]


Such a module will not reduce the number of network cables, but it lets you connect to the corporate LAN with minimal investment. If you instead use an inexpensive Cisco Catalyst 3020 with eight 1GbE ports and four 1GbE SFP ports, only a few shared uplink ports of the chassis need to be connected to the external network.


[Image: Cisco Catalyst 3020 blade switch]


In terms of capabilities, such network devices are no different from ordinary ones. The HPE Virtual Connect (VC) modules look far more interesting: their main feature is the ability to create several separate networks with flexible allocation of LAN and SAN bandwidth. For example, you can bring 10GbE into the chassis and "slice" it into six gigabit LANs and one 4Gb SAN.


[Image: HPE Virtual Connect module]


At the same time, VC supports up to four connections to each server, which opens up room for creativity and cluster building. Other manufacturers have similar solutions - Lenovo's counterpart is called IBM BladeCenter Virtual Fabric.
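
To make the "slicing" idea more tangible, here is a toy model of such a bandwidth split in Python. It only illustrates the concept - the class, names, and the oversubscription check are mine, not a real Virtual Connect configuration:

```python
# Toy model: one 10GbE uplink carved into several logical LAN/SAN connections,
# the way Virtual Connect "slices" bandwidth. Purely illustrative.
from dataclasses import dataclass

@dataclass
class LogicalLink:
    name: str
    kind: str            # "LAN" or "SAN"
    bandwidth_gb: float  # share of the physical port, in Gbit/s

PHYSICAL_PORT_GB = 10.0  # one 10GbE port brought into the chassis

profile = [LogicalLink(f"lan{i}", "LAN", 1.0) for i in range(1, 7)]  # six 1Gb LANs
profile.append(LogicalLink("san1", "SAN", 4.0))                      # one 4Gb SAN

allocated = sum(link.bandwidth_gb for link in profile)
if allocated > PHYSICAL_PORT_GB:
    raise ValueError(f"oversubscribed: {allocated} Gb on a {PHYSICAL_PORT_GB} Gb port")

for link in profile:
    print(f"{link.name:>5} ({link.kind}): {link.bandwidth_gb} Gb")
print(f"allocated {allocated} of {PHYSICAL_PORT_GB} Gb")
```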


Contrary to popular belief, the blades themselves are no different from ordinary servers and provide no particular advantages for virtualization. Interesting possibilities appear only with special, vendor-locked technologies such as HPE Virtual Connect or Hitachi LPAR.

Several IPMIs from a single console


You can use the servers' BMC hardware management modules (iLO in HPE's case) to configure the blades. The administration and remote-connection mechanism differs little from a regular server, but the chassis's Onboard Administrator (OA) control modules can back each other up and provide a single entry point for managing every device in the enclosure.


The OA comes either with a built-in KVM console for connecting an external monitor, or with just a single network interface.


[Image: HPE Onboard Administrator module]


In general, administration through the OA looks like this:


  1. You connect to the web interface of the Onboard Administrator console. Here you can see the status of all subsystems, power off or remove a blade, update firmware, and so on.


  2. If you need the console of a specific server, select it in the hardware list and connect to its own iLO. From there you get the usual IPMI feature set, including mounting images and rebooting.


    [Image: iLO remote console]



Better still, connect the blade system to external management software such as HPE Insight Control or its successor, OneView. Then you can set up automatic operating system installation on a new blade and load distribution across the cluster.
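
For simpler tasks you do not even need a full management suite: each blade's iLO exposes the standard Redfish REST API (on reasonably recent iLO firmware), so routine checks and reboots can be scripted. Below is a minimal sketch; the address, credentials, and chosen ResetType are placeholders, and the set of supported reset values depends on the firmware:

```python
# Minimal sketch: query a blade's state and restart it via iLO's Redfish API.
# The iLO address and credentials below are placeholders.
import requests

ILO = "https://10.0.0.101"        # iLO address of one blade (example)
AUTH = ("admin", "password")      # use real credentials or a session token

s = requests.Session()
s.auth = AUTH
s.verify = False                  # lab only: iLO usually has a self-signed cert

# Basic inventory and health of the server behind this iLO
system = s.get(f"{ILO}/redfish/v1/Systems/1/").json()
print(system.get("Model"), system.get("PowerState"),
      system.get("Status", {}).get("Health"))

# Restart the blade; supported ResetType values vary by firmware
r = s.post(f"{ILO}/redfish/v1/Systems/1/Actions/ComputerSystem.Reset/",
           json={"ResetType": "ForceRestart"})
print("reset request:", r.status_code)
```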


Speaking of reliability: blades break just like normal servers. So when ordering a configuration, do not skimp on component redundancy, and study the firmware update instructions carefully. A hung Onboard Administrator is merely an inconvenience for the administrator, but a botched firmware update across all elements of the blade system can leave it inoperable.


But behind all this magic, we completely forgot about mundane matters.


Do you need blades in your company?


High density, few cables, and management from a single point are all well and good, but let's estimate the cost of the solution. Suppose an abstract organization needs to deploy ten identical servers at once. Let's compare the cost of blades against traditional HPE ProLiant DL rack-mount models. To keep the estimate simple, I leave out the cost of hard drives and network equipment.


Blades:


Name           | Model                      | Qty | Cost
Chassis        | HPE BladeSystem c7000      | 1   | 603 125 ₽
Power supply   | 2400W                      | 4   | -
Control module | HPE Onboard Administrator  | 1   | -
Network module | HPE Pass-Thru              | 2   | 211 250 ₽
Blade          | HPE ProLiant BL460c Gen8   | 10  | 4 491 900 ₽
CPU            | Intel Xeon E5-2620         | 20  | -
RAM            | 8 GB ECC Reg               | 40  | -
Total          |                            |     | 5 306 275 ₽

Prices are valid as of 06.02.2017; source: STSS.

The rack-mount alternative - HP ProLiant DL360p Gen8:


Name         | Model                    | Qty | Cost
Platform     | HP ProLiant DL360p Gen8  | 10  | 3 469 410 ₽
CPU          | Intel Xeon E5-2620       | 20  | -
Power supply | 460W                     | 20  | -
RAM          | 8 GB ECC Reg             | 40  | -
Total        |                          |     | 3 469 410 ₽

Prices are valid as of 06.02.2017; source: STSS.


The difference is almost two million rubles, and I have not even priced in full fault tolerance: an additional control module and, ideally, a second chassis. On top of that, we deprive ourselves of convenient network switching by using the cheapest pass-thru modules that simply bring the servers' network interfaces outside. Virtual Connect would be more appropriate here, but the price...

It turns out that saving money head-on will not work, so let's move on to the other pros and cons of blades.
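
For clarity, here is the same back-of-the-envelope arithmetic as a few lines of Python, using only the figures from the tables above:

```python
# Rough comparison based on the table figures (rubles, as of 06.02.2017);
# drives and external network equipment are excluded, as in the tables.
SERVERS = 10

blade_total = 603_125 + 211_250 + 4_491_900   # chassis + 2x Pass-Thru + 10 blades
rack_total = 3_469_410                        # 10x HP ProLiant DL360p Gen8

print(f"Blades: {blade_total:,} ₽ ({blade_total // SERVERS:,} ₽ per server)")
print(f"Rack:   {rack_total:,} ₽ ({rack_total // SERVERS:,} ₽ per server)")
print(f"Difference: {blade_total - rack_total:,} ₽")
```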


Some more arguments


The obvious advantages of blade systems include:

  - high equipment density and savings on rack space;
  - far fewer cables thanks to the shared, redundant infrastructure (power, cooling, networking);
  - a single point of management for all servers in the chassis;
  - fast scaling - adding a server means simply sliding another blade into the enclosure.

But it is not without drawbacks:

  - a high entry cost: the chassis and interconnect modules pay off only when the enclosure is well populated;
  - vendor lock-in: the blades, interconnect modules, and the most interesting features like Virtual Connect tie you to a single manufacturer;
  - firmware updates require care - a botched update can take the whole chassis down;
  - little room for local disks in compact blades.


So what to choose


Blades look most natural in really large data centers, for example at hosting providers. In such scenarios the speed of scaling and maximum equipment density come first, and the savings on space and administration may well pay for the enclosure and all the Virtual Connect magic.


In other cases, ordinary rack servers seem more reasonable and more universal. Besides, the widespread adoption of fast virtualization systems has further reduced the popularity of blades, since most workloads can be consolidated just as well on virtual servers. Needless to say, managing virtual machines is even more convenient than managing blades.


If you have used blade systems in companies that are not among the largest, share your impressions of administering them.



Source: https://habr.com/ru/post/323386/

