
As data traffic between the components of highly loaded systems grows, the problem of delivering it precisely, ideally point to point, becomes more acute. The ideal solution here would be a universal interconnect technology with high bandwidth that lets both local and networked devices interact efficiently, in essence combining them into a single matrix on the scale of a data center or a network core. Funny but true: one such “matrix” (or rather, data switching technology) appeared almost at the same time as the Wachowskis' film. It is the InfiniBand standard.
InfiniBand technology originated in 1999 with the merger of two competing projects (Future I/O and Next Generation I/O) backed by the largest communications equipment manufacturers of the time: Compaq, IBM, Hewlett-Packard, Intel, Microsoft and Sun. Architecturally, it is a switched network of high-speed links between computing modules and storage devices on the scale of a supercomputer or a computing center. The following priorities were built into the InfiniBand standard from the start:
- Hierarchical traffic prioritization;
- Low latency;
- Scalability;
- Redundancy;
and, probably most important of all, the ability to choose a suitable speed from a range running from high to very high. The table below shows InfiniBand simplex bandwidth in terms of useful (payload) traffic for different modes and lane counts.
|     | SDR | DDR | QDR | FDR | EDR |
| --- | --- | --- | --- | --- | --- |
| 1X  | 2 Gbit/s | 4 Gbit/s | 8 Gbit/s | 13.64 Gbit/s | 25 Gbit/s |
| 4X  | 8 Gbit/s | 16 Gbit/s | 32 Gbit/s | 54.54 Gbit/s | 100 Gbit/s |
| 12X | 24 Gbit/s | 48 Gbit/s | 96 Gbit/s | 163.64 Gbit/s | 300 Gbit/s |
SDR - Single Data Rate; DDR - Double Data Rate; QDR - Quad Data Rate; FDR - Fourteen Data Rate; EDR - Enhanced Data Rate.
The InfiniBand bus is serial, just like, say, PCIe or SATA, but unlike the latter it can use both fiber and copper media, which lets it serve internal connections as well as fairly long external ones. Data on the link is encoded with the 8b/10b scheme for speeds up to and including QDR and with 64b/66b for FDR and EDR. InfiniBand links are usually terminated with CX4 (in the photo on the left) and QSFP connectors; optics are increasingly used for high-speed links.
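The figures in the table can be reproduced from the nominal per-lane signalling rate and the encoding overhead. Below is a minimal sketch in C that does this arithmetic; the per-lane rates (2.5, 5, 10, 14.0625 and 25.78125 Gbit/s) are the nominal values for each mode, and everything else is just multiplication:

```c
#include <stdio.h>

/* Nominal per-lane signalling rates in Gbit/s and the encoding
 * efficiency (payload bits / line bits) for each InfiniBand mode. */
struct ib_mode {
    const char *name;
    double lane_gbps;   /* raw signalling rate per lane */
    double efficiency;  /* 8b/10b = 0.8, 64b/66b = 64.0/66.0 */
};

int main(void) {
    const struct ib_mode modes[] = {
        { "SDR",  2.5,      0.8 },
        { "DDR",  5.0,      0.8 },
        { "QDR", 10.0,      0.8 },
        { "FDR", 14.0625,   64.0 / 66.0 },
        { "EDR", 25.78125,  64.0 / 66.0 },
    };
    const int lanes[] = { 1, 4, 12 };

    for (size_t m = 0; m < sizeof(modes) / sizeof(modes[0]); m++) {
        for (size_t l = 0; l < sizeof(lanes) / sizeof(lanes[0]); l++) {
            double useful = modes[m].lane_gbps * modes[m].efficiency * lanes[l];
            printf("%s %2dX: %.2f Gbit/s useful\n",
                   modes[m].name, lanes[l], useful);
        }
    }
    return 0;
}
```

For FDR 4X, for example, this gives 4 × 14.0625 × 64/66 ≈ 54.5 Gbit/s, matching the table.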
Promotion and standardization of InfiniBand is handled by the InfiniBand Trade Association, a consortium of interested manufacturers that includes IBM, Hewlett-Packard, Intel, Oracle and other companies. As for the equipment itself, i.e. InfiniBand adapters and switches, the leading market positions are held by Mellanox and QLogic (acquired by Intel in early 2012).
Let us look at the architecture of InfiniBand networks in more detail, using a small SAN as an example.

InfiniBand adapters fall into two categories: Host Channel Adapters (HCA) and Target Channel Adapters (TCA). HCAs are installed in servers and workstations, TCAs in storage devices; accordingly, the former initiate and control transfers, while the latter execute commands and also move data. Each adapter has one or more ports. As already mentioned, one of the features of InfiniBand is precise routing of traffic. For example, moving data from one storage array to another must be initiated by an HCA, but once the control directives have been handed over, the server steps out of the picture: all traffic flows directly between the two storage devices.

The photo on the left shows a QLogic QLE7340 HCA (QDR, 40 Gbit/s).
As you can see, the number of links between InfiniBand nodes is deliberately redundant; this both raises throughput and provides failover. The set of end nodes connected to one or more switches is called a subnet; the subnet map, that is, the set of available routes between nodes, is kept in the memory of the subnet manager, of which there must be at least one. Multiple subnets can be joined into a larger network using InfiniBand routers.
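To illustrate how a node sees the subnet, here is a minimal sketch using the libibverbs API (the de facto standard userspace interface to InfiniBand hardware): it lists the local adapters and, for port 1 of each, prints the port state and the LID, the local address that the subnet manager assigns to the port. Build with -libverbs; error handling is kept to a minimum.

```c
#include <stdio.h>
#include <infiniband/verbs.h>

/* List local InfiniBand devices and show, for port 1 of each, the state
 * and the LID (local identifier) assigned by the subnet manager. */
int main(void) {
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs) {
        perror("ibv_get_device_list");
        return 1;
    }
    for (int i = 0; i < num; i++) {
        struct ibv_context *ctx = ibv_open_device(devs[i]);
        if (!ctx)
            continue;
        struct ibv_port_attr port;
        if (ibv_query_port(ctx, 1, &port) == 0)
            printf("%s: port 1 state=%d lid=0x%x\n",
                   ibv_get_device_name(devs[i]), port.state, port.lid);
        ibv_close_device(ctx);
    }
    ibv_free_device_list(devs);
    return 0;
}
```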
InfiniBand was designed not only as a means of optimal data transfer but also as a standard for directly exchanging the contents of server memory; thus the Remote Direct Memory Access (RDMA) protocol runs on top of it, allowing memory regions to be read and written remotely without involving the operating system. RDMA, in turn, serves as the basis for a series of more specialized protocols that extend its functionality.
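To give a feel for what RDMA looks like at the API level, here is a libibverbs sketch covering only the first step of any RDMA exchange: registering a local buffer so that a remote peer may read or write it directly. Queue pair setup, connection establishment and the actual RDMA work requests are omitted, and picking the first device in the list is an assumption made for brevity:

```c
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

/* Sketch: register a local buffer for remote RDMA access. Once the peer
 * knows the buffer address and the rkey, its reads and writes bypass
 * both operating systems entirely. */
int main(void) {
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) {
        fprintf(stderr, "no InfiniBand devices found\n");
        return 1;
    }
    struct ibv_context *ctx = ibv_open_device(devs[0]);
    if (!ctx) {
        fprintf(stderr, "ibv_open_device failed\n");
        return 1;
    }
    struct ibv_pd *pd = ibv_alloc_pd(ctx);           /* protection domain */

    size_t len = 1 << 20;                            /* 1 MiB buffer */
    void *buf = malloc(len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) {
        perror("ibv_reg_mr");
        return 1;
    }
    /* These two values are what the remote side needs in order to target
     * the buffer with RDMA READ/WRITE work requests; they are exchanged
     * out of band, e.g. over an ordinary TCP connection. */
    printf("buffer addr=%p rkey=0x%x\n", buf, mr->rkey);

    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```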
The standard TCP/IP protocol stack can also run on top of InfiniBand (IP over InfiniBand, IPoIB); as far as I know, it is included in every InfiniBand software bundle, both the branded ones from the various network equipment vendors and the open ones.
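The practical consequence is that applications need no changes at all: to them an IPoIB interface (typically named ib0) is just another network interface. The sketch below is an ordinary TCP client with nothing InfiniBand-specific in it; the port 5000 and the address 192.0.2.10 are placeholders standing in for whatever service and IPoIB address you actually have:

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

/* Ordinary TCP client; if the destination address belongs to an IPoIB
 * interface (e.g. ib0), the traffic is carried over InfiniBand transparently. */
int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(5000);                      /* hypothetical service port */
    inet_pton(AF_INET, "192.0.2.10", &addr.sin_addr); /* placeholder IPoIB address */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
        const char msg[] = "hello over IPoIB\n";
        write(fd, msg, sizeof(msg) - 1);
    } else {
        perror("connect");
    }
    close(fd);
    return 0;
}
```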
InfiniBand speed growth chart

In principle, InfiniBand finds its application wherever large volumes of data are moved at high speed, be it supercomputers, high-performance clusters, distributed databases and so on. For example, Oracle, an active IBTA member, has long used InfiniBand as practically the only interconnect in its proprietary clustering products and has even developed its own data transfer protocol on top of InfiniBand, Reliable Datagram Sockets (RDS). InfiniBand infrastructure is quite expensive, so it can hardly be called widespread. But the chances of meeting it in person are definitely there if your career is headed toward big gigabits per second and terabytes. Then you will dig deep into the topic; in the meantime, an introductory overview like the one you have just read is enough.