Extreme Networks currently has in its portfolio the most powerful switch in the world, the BlackDiamond X8, which boasts truly impressive per-chassis throughput (20 Tb/s), power consumption, and latency figures while occupying just one third of a standard rack. The switch has collected many awards and broken many records, which you can read about at your leisure in the well-known Lippis Report (www.bradreese.com/lippis-report-dec-2011.pdf) or other resources.
Since the device has a rather niche positioning and not every IT specialist will get to see or work with such a marvel of Ethernet technology in their lifetime, I would like to share my impressions and photos below:
The chassis itself arrived on a pallet. Four of us (and at times just two, where there was not enough room) carried it in quite easily; the total weight seemed to be no more than 100 kg.
Alongside the chassis itself come many more boxes: fabric modules, line cards, management modules, power supplies...
The switch is built on the orthogonal direct-connect architecture that is currently gaining momentum: the line cards plug straight into the fabric modules, there is no backplane or midplane to impose limitations, and everything is ready for 100 Gigabit Ethernet. The photo was taken with the fan modules already removed; from top to bottom you can see the power inlet connectors, a bay for the power supplies, five small connectors for the fan modules themselves, and at the very bottom the guides for the four fabric modules (identical ones at the top are simply not visible in the photo).
The switch has five fan modules with 4+1 redundancy, and within each module the fans themselves are also redundant in a 5+1 scheme.
Airflow through the chassis is front-to-back, which lets you maintain the proper hot and cold aisle layout in the data center.
One of the most essential items when installing equipment this expensive comes in the box: an antistatic wrist strap.
The fabric modules go in fairly easily, provided you hit the guides correctly)) There are latches at the top and bottom that need to be secured afterwards.
The fabric modules are redundant in a 3+1 scheme, and even if one fails all ports keep running at full speed. The hardware installation guide recommends populating them from left to right in the first three positions, leaving the last one for the spare, in case you could not afford all the fabric modules at once.
This is how it looks with the fabric modules installed. Each connector like the one in the photo on the right provides 320 Gbps of bandwidth, so with three fabric modules and a 24-port 40GbE line card we get line rate on all ports: 3 × 320 Gbps = 960 Gbps of fabric capacity against 24 × 40 Gbps = 960 Gbps of front-panel bandwidth. Extreme Networks has also developed fabric modules with half that bandwidth, for those who do not need such a high density of 40GbE per slot and plan to install cards with 12x40GbE or 48x10GbE ports, respectively.
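As a sanity check on that arithmetic, here is a tiny Python sketch; the constant and function names are mine for illustration, not any Extreme tool:

    # Back-of-the-envelope check: does the active fabric capacity cover a
    # line card's front-panel bandwidth at line rate?
    FABRIC_SLOT_GBPS = 320  # per-slot bandwidth of one fabric connector (from the article)

    def oversubscription(active_fabrics, ports, port_speed_gbps):
        """Front-panel bandwidth divided by fabric bandwidth (1.0 = line rate)."""
        return (ports * port_speed_gbps) / (active_fabrics * FABRIC_SLOT_GBPS)

    # 24x40GbE card with three active fabric modules: 960/960 = 1.0, line rate.
    print(oversubscription(3, 24, 40))  # 1.0
    # With 3+1 redundancy, one failed fabric still leaves three active,
    # so the ratio stays at 1.0 and no port slows down.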
And here it is after the fan modules have been installed.
On the front side, under the decorative panel, there are eight bays for power supplies, which are redundant in the traditional N+1 or N+N schemes. After power is applied, the installed boards are initialized and the power budget required for them is calculated; if the budget is exceeded, some line cards may not start.
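To make the budgeting idea concrete, here is a minimal Python sketch under invented numbers; the actual ExtremeXOS accounting is certainly more involved and its real wattages are not given here:

    # Minimal sketch of the power-budget idea: bring boards up in order until
    # the remaining budget runs out. All wattages are invented for illustration.
    PSU_WATTS = 2500        # assumed output of a single supply
    INSTALLED_PSUS = 4
    RESERVED_PSUS = 1       # N+1; under N+N this would be INSTALLED_PSUS // 2

    def power_on(cards):
        budget = (INSTALLED_PSUS - RESERVED_PSUS) * PSU_WATTS
        for name, watts in cards:
            if watts <= budget:
                budget -= watts
                print(f"{name}: powered on ({watts} W), {budget} W left")
            else:
                print(f"{name}: not started, needs {watts} W, only {budget} W left")

    power_on([("MM-A", 100), ("slot-1", 400), ("slot-2", 400), ("slot-3", 400)])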
The supplies slide in easily and lock with a small latch. The line card slots are covered by a single solid panel that is intended only for transportation. If the chassis is not fully populated with line cards, blank panels for the empty slots must be purchased separately; they are not included.
The switch has a powerful control plane built around a "2GHz Intel i7 Dual Core CPU." The management module itself has a different form factor from the line cards and takes up only half the slot width.
It was interesting to spot on the board an unpopulated area laid out for SyncE (Synchronous Ethernet) support components, even though the device is positioned quite differently and the vendor states that SyncE is not supported and never will be)) At the moment this functionality is available only on Summit X460 switches and E4G routers.
The line cards with 48 SFP+ ports and 24 QSFP+ ports carry 2 and 4 ASICs, respectively. Extreme Networks uses the latest Broadcom ASICs in its equipment, which delivers maximum performance at minimal power consumption, about 5 W per 10GbE port.
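Taking that 5 W figure at face value, a rough per-card estimate looks like this; this is my own extrapolation, not vendor data:

    # Rough per-card power estimate from the ~5 W per 10GbE port figure above.
    # Real draw depends on optics, traffic load, and card design.
    WATTS_PER_10G_PORT = 5

    def card_watts(ports, port_speed_gbps):
        ten_gig_equivalents = ports * port_speed_gbps // 10
        return ten_gig_equivalents * WATTS_PER_10G_PORT

    print(card_watts(48, 10))  # 48x10GbE card: ~240 W
    print(card_watts(24, 40))  # 24x40GbE card: ~480 W (96 10GbE equivalents)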
After power-on we land straight in the familiar (and, for those who have worked with it, well-loved) interface of the proprietary ExtremeXOS operating system, which is exactly the same as on even the smallest model in the line. We also verified that the switch accepts SFP+ and QSFP+ transceivers from OEM manufacturers.
Impressions:
It was a pleasant surprise how easy such powerful hardware is to assemble and bring up; I would love to see it working under full load. The fairly low noise level was another pleasant surprise: if you compare an idle BlackDiamond X8 chassis with a rack-mount Summit X650-24x, the Summit would most likely turn out to be the louder of the two.
Extreme Networks not only continues to promote this product actively but also keeps developing it, as evidenced by the recent announcement (at Interop 2012) of new line cards with 48 10GBase-T ports. And since Intel introduced CNA adapters of the same standard at the same event, with support for converged Ethernet over copper, the solution becomes even more attractive for data centers.