This is a continuation; the beginning was here. Separately, I would like to draw attention to the HPE Synergy Planning Tool.
Continuing the announced series of articles about HPE Synergy, we begin with a description of the chassis or, as the manufacturer puts it, "The HPE Synergy 12000 Frame is a key element of HPE Synergy".
General view of the HPE Synergy 12000 Frame chassis
It can be seen that the D3940 disk module can be installed in bays 1 and 2 (zone 1) or, for example, in bays 3 and 4 (zone 2), but not in bays 2 and 3, because those belong to different zones. The horizontal partitions can apparently be removed to install full-height blade servers, but it is not clear how (by the customer or not); I plan to ask the manufacturer about this when I get the chance. The handles for rack mounting look interesting.
At the moment there are no decent photos of the chassis, and the quality in the available materials is not great, so I had to improvise a little and draw the chassis myself using Visio stencils taken from the VisioCafe website.
HPE Synergy 12000 Frame chassis, front view:
The front panel shows that the Composer is installed at the top, in Appliance Bay 1 (the Russian documentation translates the name literally, something like "linker"/"compiler", but IMHO it is better to keep Composer), and apparently in the main (or first) chassis this bay always holds the Composer. The Image Streamer is installed in Appliance Bay 2 (the Russian documentation calls it an "image distributor", which immediately brings to mind Andrei Rublev and the stern faces of Orthodox icons; a more accurate rendering would be a streaming loader of operating system images). Instead of the Image Streamer, a second Composer can be installed in a single-chassis configuration. If there is more than one chassis (two or more), HPE recommends installing the Composers in different chassis. I plan a separate review of these devices. Next to them is the pull-out information panel, which slides out downward.
Further down, the bays for blade servers are divided into zones of two bays each. The yellow numbers indicate the zones, the red ones the bays. In bays 1 and 2, that is, occupying all of zone 1, an HPE Synergy D3940 disk module for 40 drives is installed, followed by various types of servers:
- in bay 7, a two-drive, two-processor Synergy (hereinafter SY) 480 Gen9;
- in bay 8, a diskless (sic!) two-processor SY 480 Gen9;
- in bays 3 and 9, a four-drive, four-processor SY 660 Gen9;
- in bays 4 and 10, a two-drive, two-processor SY 620 Gen9 - in the center you can see the cover of the stacking connector;
- in bays 5-6-11-12, an eight-drive, four-processor SY 680 Gen9 - in fact, this is two SY 620 Gen9 joined together.
On the right you can see the Synergy Console - a console for connecting a PC/laptop and managing the chassis. The console has 3 ports: DisplayPort v1.2, USB 2.0 and RJ-45. At the back it connects to the Frame Link Module (hereinafter FLM), which I will cover separately together with the Composer and Image Streamer. Apparently, the console is used when there is a single chassis; when there is more than one, FLMs are installed in the rear bays, which can also be used to manage the chassis, and the FLMs of different chassis are linked to each other. The documentation says the connections are functionally identical, although it suggests connecting to the FLM using "simple VNC services (free VNC software may be downloaded from the internet)" at 192.168.10.1:5900.
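Since the documentation gives only an address and a port, before launching a VNC client you can check that the FLM interface is actually reachable with a plain TCP probe. A minimal sketch (the 192.168.10.1:5900 address is the one quoted above; everything else is my own illustration):

```python
import socket

FLM_ADDR = ("192.168.10.1", 5900)  # address and port quoted in the HPE documentation


def flm_vnc_reachable(addr=FLM_ADDR, timeout=3.0):
    """Return True if a TCP connection to the FLM VNC service succeeds."""
    try:
        with socket.create_connection(addr, timeout=timeout) as sock:
            banner = sock.recv(12)  # a VNC server sends its RFB version first, e.g. b"RFB 003.008\n"
            return banner.startswith(b"RFB")
    except OSError:
        return False


if __name__ == "__main__":
    print("FLM VNC reachable:", flm_vnc_reachable())
```

If the probe succeeds, you simply point any VNC viewer at the same address and port.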

While I was writing this, I found a table with bay numbering options for various equipment layouts:
At the back everything is fairly standard: 6 bays for interconnect modules (we will consider them separately), 10 fans (the chassis ships with all ten by default, and the documentation says this number cannot be changed) and 6 power supply bays - any quantity can be ordered, but the manufacturer strongly recommends using HPE Power Advisor to calculate the planned power consumption. As far as I can tell, the HPE Synergy Planning Tool, which I linked above, also includes this functionality.
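To illustrate why the vendor insists on a calculation instead of a fixed number of power supplies, here is a toy sizing sketch. All wattage figures and the PSU rating are placeholders I made up for illustration; real numbers should come from HPE Power Advisor or the Planning Tool:

```python
import math

# Hypothetical per-component draw in watts - placeholders, not HPE figures.
LOAD_W = {
    "compute modules": 12 * 350,
    "disk module": 1 * 400,
    "interconnect modules": 2 * 150,
    "fans + FLM": 500,
}

PSU_RATING_W = 2650  # assumed rating of a single power supply


def psus_needed(total_w, rating_w, redundancy="N+1"):
    """Minimum PSU count to carry the load with the chosen redundancy scheme."""
    n = math.ceil(total_w / rating_w)
    if redundancy == "N+1":
        return n + 1
    if redundancy == "N+N":
        return 2 * n
    return n


total = sum(LOAD_W.values())
print(f"Estimated load: {total} W")
print("PSUs for N+1 redundancy:", psus_needed(total, PSU_RATING_W, "N+1"))
print("PSUs for N+N redundancy:", psus_needed(total, PSU_RATING_W, "N+N"))
```

Depending on the actual load, the same six PSU bays can therefore hold anything from two power supplies up to a full set.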

To the left of power supply No. 1 there is also a pull-out information label. Thus, the minimum chassis configuration is:
- the chassis enclosure itself;
- ten fans;
- two power supplies;
- one interconnect module;
- one FLM;
- one Composer.
Let's move on to the servers. First, some general information. Each server has its own pull-out information label that contains:
- the product serial number;
- iLO information;
- a QR code pointing to mobile-friendly documentation - that made me smile.
The simplest server blade is the HPE Synergy 480 Gen9. It comes in two versions: with a cage for two SFF drives or with no cage at all. The difference in price is about 100 USD according to the price list. Note that even in the diskless version you can insert an SD card of up to 32 GB, or an 8 GB USB flash drive, into the blade server (there are no other options here).

Supported processors: up to two per server, Xeon E5-26xx v4 - from the Xeon E5-2603 v4 (1.7GHz/6-core/15MB/85W) to the E5-2699 v4 (2.2GHz/22-core/55MB/145W).
Chipset: Intel C610 Series Chipset.
Memory: 12 slots per processor, 24 slots in total, maximum 1.5 TB (24 x 64 GB DDR4 LRDIMM).
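The maximum memory figures quoted here and below for the other models follow directly from slots per processor x processor count x DIMM size. A small sketch re-deriving them (the slot counts are the ones given in this article; 64 GB LRDIMMs assumed):

```python
# Re-deriving the maximum memory figures quoted in the text.
MODELS = {
    "SY 480 Gen9": {"cpus": 2, "slots_per_cpu": 12},
    "SY 620 Gen9": {"cpus": 2, "slots_per_cpu": 24},
    "SY 660 Gen9": {"cpus": 4, "slots_per_cpu": 12},
    "SY 680 Gen9": {"cpus": 4, "slots_per_cpu": 24},
}
DIMM_GB = 64  # largest DDR4 LRDIMM mentioned in the specs

for name, m in MODELS.items():
    slots = m["cpus"] * m["slots_per_cpu"]
    print(f"{name}: {slots} slots -> {slots * DIMM_GB / 1024:.1f} TB")
```

This gives 1.5 TB for the SY 480, 3.0 TB for the SY 620 and SY 660, and 6.0 TB for the SY 680, matching the figures in the specifications.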
Three mezzanine card bays are available for installing fabric (switching) cards.
Network: two converged (i.e. supporting different network types) adapters - the SY 3820C 10/20Gb (FCoE + Ethernet) and the SY 2820C 10/20Gb (FCoE or iSCSI + Ethernet). There are also two SAN adapters, the SY 3830C and SY 3530C, with 16Gb FC. I plan to describe the backplane switching in detail in the article about the interconnects; for now it is enough to say that each adapter has 4 lanes to the switch, which in total can yield 40 Gbit/s using the 20 (up) + 20 (down) formula.
Disks: SFF and uFF drives are supported. I had not seen uFF drives before (it reads as "micro form factor"); it turns out they are two M.2 drives installed in a carrier the size of an SFF drive, with a sled attached for mounting in an SFF bay. Maximum available:
- SFF SAS - 4.0 TB as 2 x 2.0 TB;
- SFF SATA - 4.0 TB as 2 x 2.0 TB;
- SFF SAS SSD - 7.68 TB as 2 x 3.84 TB;
- SFF SATA SSD - 3.2 TB as 2 x 1.6 TB;
- SFF NVMe SSD - 4.0 TB as 2 x 2.0 TB;
- uFF SATA SSD - 1.36 TB as 4 x 340 GB.
Disk controllers:
- HPE Smart Array P240nr Controller with 1GB Flash-Backed Write Cache (FBWC) - RAID 0, 1, 10, 5, 6 and 1 ADM (Advanced Data Mirroring: a three-way mirror that keeps three copies of the data; see the capacity sketch after this list);
- HPE Smart Array P542D Controller with 2GB Flash-Backed Write Cache (FBWC) - RAID 0, 1, 10, 5, 50, 6, 60, 1 ADM and 10 ADM; this controller is required to connect to the SY D3940 disk module;
- HPE H240nr Smart HBA - RAID 0, 1, 10, 5 - if 4 uFF drives are installed in the server;
- HPE B140i (chipset SATA).
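To make the difference between these RAID levels more tangible, here is a small sketch computing usable capacity for several of the levels listed above. The formulas are the generic ones (1 ADM is treated as a three-way mirror), not anything specific to these controllers, and the D3940 drive count and size in the example are hypothetical:

```python
def usable_tb(drives, drive_tb, level):
    """Usable capacity for some of the RAID levels supported by the controllers above."""
    formulas = {
        "0": lambda n: n,          # striping, no redundancy
        "1": lambda n: n / 2,      # mirror
        "10": lambda n: n / 2,     # striped mirrors
        "1adm": lambda n: n / 3,   # three-way mirror (Advanced Data Mirroring)
        "5": lambda n: n - 1,      # single parity
        "6": lambda n: n - 2,      # double parity
    }
    return formulas[level](drives) * drive_tb


# The two internal 2.0 TB SFF drives mirrored by the P240nr:
print("RAID 1 over 2 x 2.0 TB:", usable_tb(2, 2.0, "1"), "TB")
# A hypothetical D3940 volume of 10 x 1.92 TB drives behind the P542D:
print("RAID 6 over 10 x 1.92 TB:", usable_tb(10, 1.92, "6"), "TB")
print("RAID 1 ADM over 3 x 1.92 TB:", usable_tb(3, 1.92, "1adm"), "TB")
```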
Options: one NVIDIA Tesla M6 mezzanine GPU can be installed - 8 GB of GDDR5 video memory, up to 16 virtual desktop users. Note that it is yet another "heater" - up to 100W, in effect like one more processor.
Supported operating systems:
- Microsoft Windows Server;
- Microsoft Hyper-V Server;
- Red Hat Enterprise Linux (RHEL);
- SUSE Linux Enterprise Server (SLES);
- VMware ESXi.
For some reason, Citrix XenServer 6.5 is not on the list, although its support is announced for the NVIDIA Tesla M6. In addition to all this there are: one external USB 3.0 port, one internal USB 3.0 port and one internal microSD card slot. Management is via iLO or through the Composer with its built-in OneView functionality.
Overall, a workhorse, well suited for building graphics VDI farms. Its analogue among rack servers is the HPE ProLiant DL380 Gen9.
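Since the Composer exposes the OneView REST API, routine management can also be scripted rather than done through the UI. A minimal sketch of listing the compute modules, assuming the standard OneView REST endpoints (/rest/login-sessions and /rest/server-hardware); the appliance address and credentials are hypothetical, and the X-API-Version value should be checked against your Composer firmware:

```python
import requests

APPLIANCE = "https://composer.example.local"  # hypothetical Composer address
HEADERS = {"X-API-Version": "300", "Content-Type": "application/json"}


def login(user, password):
    """Open a OneView session and return the session token."""
    r = requests.post(f"{APPLIANCE}/rest/login-sessions",
                      json={"userName": user, "password": password},
                      headers=HEADERS, verify=False)  # self-signed appliance certificate
    r.raise_for_status()
    return r.json()["sessionID"]


def list_server_hardware(token):
    """Return name, model and power state of the compute modules known to the Composer."""
    r = requests.get(f"{APPLIANCE}/rest/server-hardware",
                     headers={**HEADERS, "Auth": token}, verify=False)
    r.raise_for_status()
    return [(m["name"], m["model"], m["powerState"]) for m in r.json()["members"]]


if __name__ == "__main__":
    token = login("Administrator", "password")
    for name, model, power in list_server_hardware(token):
        print(f"{name:20} {model:30} {power}")
```

There is also an official Python binding for OneView that wraps these endpoints, so in practice you would probably use that instead of raw requests.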
Next comes HPE Synergy 620 Gen9.

Supported processors: up to two per server, but the processors here are more serious - Xeon E7-48xx v4 and E7-88xx v4, up to and including the Xeon E7-8890 v4 (2.2GHz/24-core/60MB/165W).
Chipset: Intel C602J Series Chipset.
Memory: 24 slots per processor, 48 slots in total, maximum 3.0 TB (48 x 64 GB DDR4 LRDIMM). Five mezzanine card bays are available for installing fabric cards.
Network: no changes relative to the SY 480 Gen9.
Disks: SFF and uFF drives are supported. Oddly enough, this server also has only two SFF drive slots (see the left part of the server in the picture). Maximum available:
- SFF SAS - 4.0 TB as 2 x 2.0 TB;
- SFF SATA - 4.0 TB as 2 x 2.0 TB;
- SFF SAS SSD - 7.68 TB as 2 x 3.84 TB;
- SFF SATA SSD - 3.2 TB as 2 x 1.6 TB;
- SFF NVMe SSD - 4.0 TB as 2 x 2.0 TB;
- uFF SATA SSD - 1.36 TB as 4 x 340 GB.
Disk controllers: no change.
Options: none.
Supported operating systems: no change.
In addition to all this there are: one external USB 2.0 port, one internal USB 2.0 port (that is what the documentation says; I suspect it is an error and the ports are actually 3.0) and one internal microSD card slot.
Management is via iLO or through the Composer with built-in OneView functionality. The closest counterpart among rack servers is the HPE ProLiant DL560 Gen9, but the rack server does not support the Xeon E7-88xx v4.
We turn to the heavy artillery - the HPE Synergy 660 Gen9.

Supported processors: up to four per server (three is not possible), Xeon E5-46xx v4 (the SY 620 Gen9 uses E7-48xx - it is important not to confuse them), up to and including the E5-4669 v4 (2.2GHz/22-core/55MB/135W).
Chipset: Intel C610 Series Chipset (as in the SY 480 G9).
Memory: 12 slots per processor, 48 slots in total, maximum 3.0 TB (48 x 64 GB DDR4 LRDIMM). Six mezzanine card bays are available for installing fabric cards.
Network: no change.
Disks: SFF and uFF drives are supported. Maximum available:
- SFF SAS - 8.0 TB as 4 x 2.0 TB;
- SFF SATA - 8.0 TB as 4 x 2.0 TB;
- SFF SAS SSD - 15.36 TB as 4 x 3.84 TB;
- SFF SATA SSD - 6.4 TB as 4 x 1.6 TB;
- SFF NVMe SSD - 8.0 TB as 4 x 2.0 TB;
- uFF SATA SSD - 2.72 TB as 8 x 340 GB.
Disk controllers: no change.
Options: none.
Supported operating systems: no change.
Ports and management: no changes. The closest analogue among rack servers is the HPE ProLiant DL580 Gen9, but the blade server does not support the Xeon E7-88xx v4.
Finally, the last blade server is the HPE Synergy 680 Gen9. You can see that it is "glued together" from two SY 620 Gen9; all of its characteristics are exactly double those of the SY 620 Gen9.

Supported processors: up to four per server, Xeon E7-48xx v4 and E7-88xx v4.
Chipset: Intel C602J Series Chipset.
Memory: 24 slots per processor, 96 slots in total, maximum 6.0 TB (96 x 64 GB DDR4 LRDIMM). Ten mezzanine card bays are available for installing fabric cards.
Network: no changes relative to the SY 480 Gen9.
Disks: SFF and uFF drives are supported. Maximum available:
- SFF SAS - 8.0 TB as 4 x 2.0 TB;
- SFF SATA - 8.0 TB as 4 x 2.0 TB;
- SFF SAS SSD - 15.36 TB as 4 x 3.84 TB;
- SFF SATA SSD - 6.4 TB as 4 x 1.6 TB;
- SFF NVMe SSD - 8.0 TB as 4 x 2.0 TB;
- uFF SATA SSD - 2.72 TB as 8 x 340 GB.
Disk controllers: no change.
Options: none.
Supported operating systems: no change.
Besides all this there are: one external USB 2.0 port, one internal USB 2.0 port (the USB 2.0 information from the SY 620 Gen9 documents is repeated in the SY 680 Gen9 documents - apparently another question for the vendor) and one internal microSD card slot. Management is via iLO or through the Composer with built-in OneView functionality.
In fact, this blade server is almost a complete equivalent of the HPE ProLiant DL580 Gen9 rack server (not equivalent in the number of internal drives and PCIe slots). It is interesting that you can install 6 SY 680 Gen9 servers in two 10U chassis (20U in total), while the same computing power in the form of six rack-mount ProLiant DL580 Gen9 servers already takes 24U (i.e. in this case the density of the blade servers is 20% higher).
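The 20% figure can be checked with simple arithmetic (the DL580 Gen9 is a 4U server; the other numbers are from the paragraph above):

```python
# Six SY 680 Gen9 in two Synergy frames vs six DL580 Gen9 rack servers.
servers = 6
blade_u = 2 * 10  # two 10U frames
rack_u = 6 * 4    # six 4U DL580 Gen9

blade_density = servers / blade_u  # servers per rack unit
rack_density = servers / rack_u

print(f"Blades: {servers} servers in {blade_u}U -> {blade_density:.3f} servers/U")
print(f"Racks:  {servers} servers in {rack_u}U -> {rack_density:.3f} servers/U")
print(f"Density advantage: {blade_density / rack_density - 1:.0%}")
```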
In summary:
- SY 480 Gen9 - a server for infrastructure tasks or VDI / graphics VDI;
- SY 620 Gen9 and SY 680 Gen9 - servers for DBMS or applications with large memory requirements (OLAP?);
- SY 660 Gen9 - a server for business applications or high-density virtualization of infrastructure servers.
This concludes the description of the chassis and compute modules. In the next part I plan to cover the disk module and, possibly, the SAS switches.
P.S. In one of the documents, "Five steps to building a Composable Infrastructure with HPE Synergy", I found a good picture summarizing the description:
