
100GbE: luxury or urgent need?

IEEE P802.3ba, the standard for data transmission over 100-gigabit Ethernet channels (100GbE), was developed between 2007 and 2010 [3], but it became widespread only in 2018 [5]. Why 2018 exactly, and not earlier? And why all at once, on such a massive scale? There are at least five reasons for this ...



IEEE P802.3ba was developed primarily to meet the needs of data centers and Internet traffic exchange points (between independent operators), to ensure the smooth operation of resource-intensive web services such as portals with large amounts of video content (for example, YouTube), and to support high-performance computing. [3] Ordinary Internet users also contribute to the changing bandwidth requirements: many own digital cameras and want to transfer the content they capture over the Internet, so the amount of content circulating on the Internet keeps growing, both at the professional and at the consumer level. In all these cases, when transferring data between domains, the aggregate bandwidth of key network nodes has long exceeded the capabilities of 10GbE ports. [1] This is the reason for the emergence of the new standard: 100GbE.


Large data centers and cloud service providers are already actively using 100GbE, and in a couple of years they plan to gradually move to 200GbE and 400GbE; they are already eyeing speeds beyond a terabit. [6] That said, some major providers switched to 100GbE only last year (Microsoft Azure, for example). Data centers running high-performance computing for financial services, government platforms, oil and gas platforms, and utilities have also begun to switch to 100GbE. [5]


In corporate data centers, the need for bandwidth is somewhat lower: only recently has 10GbE become an urgent need there rather than a luxury. But since the rate of traffic consumption keeps growing, it is doubtful that 10GbE will survive in corporate data centers for even 5, let alone 10 years. Instead, we will see a quick transition to 25GbE and an even quicker one to 100GbE. [6] After all, as Intel analysts note, traffic inside the data center grows by 25% annually. [5] A quick sanity check of that figure is sketched below.
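Here is a minimal back-of-the-envelope sketch of how such growth compounds; the 25% annual rate comes from the article, while the normalized starting load of 1.0 is just an assumption for illustration:

```python
# Minimal sketch: how 25% annual growth in intra-data-center traffic compounds.
# The 25% figure is from the article; the starting load of 1.0 ("today's
# traffic, normalized") is an assumption for illustration only.

growth_rate = 0.25
traffic = 1.0  # normalized traffic level today

for year in range(1, 11):
    traffic *= 1 + growth_rate
    print(f"year {year:2d}: {traffic:.1f}x today's traffic")

# After ~5 years traffic roughly triples (1.25**5 ≈ 3.05), and after 10 years
# it reaches ~9.3x — which is why a 10GbE fabric sized for today is unlikely
# to last a decade.
```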


Analysts at Dell and Hewlett Packard state [4] that 2018 is the year of 100GbE for the data center. As early as August 2018, shipments of 100GbE equipment had already doubled the volume shipped during all of 2017, and the pace keeps growing as data centers move away from 40GbE en masse. It is expected that 19.4 million 100GbE ports will be shipped annually by 2022 (in 2017, for comparison, this figure was 4.6 million). [4] As for spending, $7 billion was spent on 100GbE ports in 2017, and about $20 billion is projected to be spent in 2020 (see Figure 1). [1]



Figure 1. Statistics and forecasts of demand for network equipment


Why now? 100GbE is not such a new technology, so why is there such a stir around it now?


1) Because the technology has matured and become cheaper. It was in 2018 that we crossed the line where using platforms with 100-gigabit ports in the data center became more cost-effective than stacking several 10-gigabit platforms. Example: the Ciena 5170 (see Fig. 2) is a compact platform providing an aggregate bandwidth of 800GbE (4x100GbE, 40x10GbE). If several 10-gigabit platforms are needed to provide the required throughput, the cost of extra hardware, extra rack space, excess power consumption, routine maintenance, spare parts, and additional cooling adds up to a rather tidy sum. [1] For example, Hewlett Packard specialists, analyzing the potential benefits of switching from 10GbE to 100GbE, arrived at the following figures: performance is higher (by 56%), total costs are lower (by 27%), power consumption is lower (by 31%), and cabling is simpler (by 38%). [5] A rough sketch of the consolidation arithmetic follows.
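The port counts (4 x 100GbE + 40 x 10GbE) come from the article; the 48-port 10GbE switch used for comparison is a hypothetical assumption, not vendor data:

```python
# Rough consolidation arithmetic for a platform like the Ciena 5170.
# Port counts are from the article; the assumed 48-port 10GbE switch used
# for the comparison is hypothetical.

ports_100g, ports_10g = 4, 40
aggregate_gbps = ports_100g * 100 + ports_10g * 10
print(f"Aggregate bandwidth of one platform: {aggregate_gbps} Gbps")  # 800 Gbps

# Reaching the same aggregate with 10GbE alone would take 80 ports:
ports_10g_only = aggregate_gbps // 10
switches_48port = -(-ports_10g_only // 48)  # ceiling division -> 2 switches
print(f"Equivalent 10GbE-only build: {ports_10g_only} ports "
      f"(~{switches_48port} x 48-port switches), plus the extra cabling, "
      f"power and rack space that go with them")
```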



Figure 2. Ciena 5170: Example Platform with 100 Gigabit Ports


2) Juniper and Cisco have finally created their own ASICs for 100GbE switches. [5] That is eloquent confirmation that 100GbE technology has really matured. The point is that ASIC chips are cost-effective to develop only when, firstly, the logic implemented on them will not need to change in the foreseeable future and, secondly, a large number of identical chips is manufactured. Juniper and Cisco would not produce these ASICs without being sure of 100GbE's maturity.


3) Because Broadcom, Cavium, and Mellanox Technologies have begun mass-producing processors with 100GbE support, and these processors are already used in switches from manufacturers such as Dell, Hewlett Packard, Huawei Technologies, Lenovo Group, and others. [5]


4) Because servers in server racks are increasingly equipped with the latest Intel network adapters (see Figure 3) with two 25-gigabit ports, and sometimes even with converged network adapters with two 40-gigabit ports (XXV710 and XL710).



Figure 3. Intel's Latest Network Adapters: XXV710 and XL710


5) Because 100GbE equipment is backward compatible, which simplifies deployment: you can reuse already-laid cables (you just need to connect a new transceiver to them).


In addition, the availability of 100GbE prepares us for new technologies such as NVMe over Fabrics (for example, the Samsung Evo Pro 256 GB NVMe PCIe SSD; see Figure 4) [8, 10], Storage Area Network (SAN) / Software Defined Storage (see Figure 5) [7], and RDMA [11], which could not realize their full potential without 100GbE.
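As a rough illustration of why these technologies need 100GbE, consider how quickly a single NVMe drive saturates a link; the ~3 GB/s read figure below is an assumed typical value for a PCIe 3.0 x4 NVMe SSD, not a number from the article:

```python
# Back-of-the-envelope: how many NVMe drives can stream at full speed over one
# Ethernet link of a given rate. The ~3 GB/s sequential-read figure is an
# assumed typical value for a PCIe 3.0 x4 NVMe SSD, not taken from the article.

nvme_gbytes_per_s = 3.0
nvme_gbits_per_s = nvme_gbytes_per_s * 8  # = 24 Gbit/s per drive

for link_gbits in (10, 25, 40, 100):
    drives = link_gbits / nvme_gbits_per_s
    print(f"{link_gbits:>3} GbE: ~{drives:.1f} drives at full read speed")

# One drive already overwhelms 10GbE and nearly fills 25GbE; only 100GbE leaves
# room for several remote NVMe targets (or RDMA flows) on a single port.
```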



Figure 4. Samsung Evo Pro 256 GB NVMe PCIe SSD



Figure 5. “Storage Area Network” (SAN) / “Software Defined Storage”


Finally, as an exotic example of the practical demand for 100GbE and related high-speed technologies, one can cite the research cloud of the University of Cambridge (see Fig. 6), built on 100GbE (Spectrum SN2700 Ethernet switches), which, among other things, ensures efficient operation of the NexentaEdge SDS distributed disk storage, a system that can easily overload a 10/40GbE network. [2] Such high-performance scientific clouds are deployed to solve a wide variety of applied scientific problems [9, 12]. For example, medical scientists use them to decipher the human genome, and 100GbE channels carry the data between university research groups.



Figure 6. Fragment of the scientific cloud of the University of Cambridge


Bibliography
  1. John Hawkins. 100GbE: Closer to the Edge, Closer to Reality // 2017.
  2. Amit Katz. 100GbE Switches - Have You Done The Math? // 2016.
  3. Margaret Rouse. 100 Gigabit Ethernet (100GbE).
  4. David Graves. Dell EMC Doubles Down on 100 Gigabit Ethernet for the Open, Modern Data Center // 2018.
  5. Mary Branscombe. The Year of 100GbE in Data Center Networks // 2018.
  6. Jarred Baker. Moving Faster in the Enterprise Data Center // 2017.
  7. Tom Clark. Fiber Channel and IP SANs. 2003. 572p.
  8. James O'Reilly. Network Storage: Tools and Technologies for Storing Your Data // 2017. 280p.
  9. James Sullivan. Student Cluster Competition 2017, Team University of Texas at Austin / Texas State University: Reproducing Vectorization of the Tersoff Multi-body Potential on the Intel Skylake and NVIDIA V100 Architectures // Parallel Computing. v.79, 2018. pp. 30-35.
  10. Manolis Katevenis. The Next Generation of Exascale-class Systems: The ExaNeSt Project // Microprocessors and Microsystems. v.61, 2018. pp. 58-71.
  11. Hari Subramoni. RDMA over Ethernet: High-Performance Interconnects for Distributed Computing // 2009.
  12. Chris Broekema. Energy-Efficient Data Transfers in Radio Astronomy with Software UDP RDMA // Future Generation Computer Systems. v.79, 2018. pp. 215-224.

P.S. The article was originally published in the "System Administrator" magazine.



Source: https://habr.com/ru/post/450156/

