
Blade Switching in a Dell PowerEdge M1000e Chassis

Today we will talk about the Dell Networking MXL and PowerEdge M I/O Aggregator switches, designed for installation in the Dell PowerEdge M1000e modular blade server chassis. They provide high-speed communication between servers, both within the chassis and with neighboring server groups, storage systems and the network core. They support the latest switching technologies and can therefore be used in complex network architectures, not only to increase data exchange speed and build fault-tolerant configurations, but also to converge data and storage networks. Each of them delivers 1.28 Tbit/s of full-duplex throughput, supports up to six 40 Gbit/s ports, and is equipped with two slots for expansion modules with various network interfaces. However, as the names imply, one of them belongs to the Dell Networking line of network equipment and the other to the Dell PowerEdge server line. Why? Let's take a closer look at both, starting with the blade chassis itself.



The PowerEdge M1000e chassis

First look

The PowerEdge M1000e modular blade server chassis is currently the company's flagship in the blade solution market. Configuration flexibility, compactness, centralized management, and the ability to add capacity and adopt new technologies during operation without completely re-equipping server cabinets are the main advantages of blade-based solutions. The platform has been in development for a long time, and the PowerEdge M1000e is a mature, complete solution with a wide choice of configuration options. Let's briefly review its design.

On the front side there are sixteen bays for half-height servers and other compatible devices; for example, Dell offers iSCSI storage systems that install directly into this chassis. The half-height server format, as in the photo, is the most popular, but full-height and quarter-height servers are also available. Thus, up to 32 servers can be installed in a single 10U chassis. The front panel also carries an interactive mini-display plus USB and VGA ports for connecting a keyboard, mouse and display.
On the back there are six bays for power supplies (at the bottom) and nine for fans, all hot-swappable. At the top are the KVM module, which allows connecting to a remote console over twisted pair, and two bays for Chassis Management Controller (CMC) modules, which provide an interface for complete control of both the chassis and the equipment installed in it. Between the fans are six bays for network switches.

The core of the chassis is the midplane, through which the installed equipment receives power and exchanges data. Each server has network ports, either pre-installed as LOM (LAN on Motherboard) or added with Mezzanine expansion cards; the chassis switches have internal ports; all of them are interconnected through the midplane. The data exchange rate across it can reach 8.4 Tbit/s in duplex mode. With the appropriate network switches installed, up to 32 internal server ports per slot are supported.

Dell offers a choice of about a dozen and a half devices of varying complexity and performance that support various switching technologies: Ethernet switches from Dell and Cisco, Fibre Channel switches from Brocade, and InfiniBand switches from Mellanox.

External switching
The diagram shows that the two A slots of the chassis are intended for Ethernet switches, while the B and C slots can also hold modules supporting Fibre Channel and InfiniBand. You can also see the two redundant CMC modules, which connect to a dedicated management network: one module provides full functionality while the second stays in hot standby. Between them is the KVM module, whose RJ-45 port serves not for a network connection but for a remote console; its two USB ports and VGA are for a local one.



Internal switching
The figure shows the internal switching scheme for half-height servers. It shows the interfaces such servers can carry: a built-in two- or four-port Ethernet adapter supporting 1 or 10 Gbit/s, and a Mezzanine expansion card that can provide up to two Fibre Channel or InfiniBand ports, or up to four Ethernet ports.



The following figure shows the port distribution when the servers use four-port Ethernet adapters together with blade switches that have 32 internal ports, such as the Dell Networking MXL.



Dell Networking MXL and PowerEdge M I/O Aggregator

General
Both switches support up to 32 internal 10 GbE ports for exchanging data between servers. Both are equipped with two 40 GbE QSFP+ ports and two expansion slots for compatible modules. Each has two USB ports on the front panel: one for the console and one for a storage device.

The QSFP+ ports accept both transceivers for subsequent fiber-optic cabling and direct-attach cables, which support 10 or 40 Gbit/s but are not intended for transmitting a signal over long distances. Optionally, breakout cables splitting one 40 Gbit/s port into four 10 Gbit/s links are available. The figure shows the supported expansion modules, transceivers and cables for them.



Only one module with four 10GBASE-T RJ-45 ports can be installed; the second slot can then hold any other expansion module. Apart from this limitation, all other modules can be used simultaneously in any available combination, whether different or identical.

Differences



PowerEdge M I/O Aggregator
It is designed for quick commissioning of the M1000e chassis and for connecting to networks built on equipment from different manufacturers supporting different network technologies. To bring it into service, you only need to unpack the switch, install it in a chassis slot, assign an administrator IP address and password, and connect the uplink ports to a rack switch or a network-core switch. On the rack switch, you need to configure a LAG in LACP mode. When convergence is used, all the DCB settings (ETS and PFC) of the new M I/O Aggregator are taken from the rack switch via the DCBX protocol. No additional licenses or pre-configuration of the blade switch are required to get started.
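For reference, the rack-switch side of such a LAG, written in the same Dell OS9 syntax used later in this article, might look like the sketch below (the port and port-channel numbers are assumptions):

  ! rack-switch side: two hypothetical ports facing the chassis, aggregated with LACP
  interface fortyGigE 0/1
   port-channel-protocol LACP
    port-channel 1 mode active
   no shutdown
  interface fortyGigE 0/2
   port-channel-protocol LACP
    port-channel 1 mode active
   no shutdown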

By default, all switch ports are active and all VLANs on them are tagged. If some ports need to be untagged in a given VLAN, this can be done through the simple graphical interface provided by the CMC module. This does not mean, however, that more detailed configuration of the device is impossible: it supports a wide range of options, and a command line is available for administration.
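For instance, making a server-facing port untagged in some VLAN from the command line could look like this minimal sketch (using the OS9 syntax shown later in this article; the VLAN and port numbers are assumptions):

  interface vlan 20
   untagged TenGigabitEthernet 0/5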

Dell Networking MXL
The switch supports the dynamic routing protocols available in Dell Networking Operating System 9 (OS9). This is a fully industry-standard-compliant operating system, managed via the command line, that gives the administrator a wide range of fine-tuning options. Its CLI is similar to Cisco IOS, and it supports PVST+ for integration with Cisco networks.
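As a quick illustration of dynamic routing support, a minimal OSPF snippet in OS9 syntax might look as follows (the process number and prefix are assumptions, and an IP interface would also need to be configured):

  router ospf 1
   network 192.168.10.0/24 area 0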

MXL supports stacking: a stack can include up to six devices, located either inside one M1000e chassis or across several chassis.

In addition to fault tolerance, this provides a high data exchange rate between servers in different chassis and lets you manage all the switches in the stack as a single network device, with one network address and common settings. The stack is connected in a ring topology, i.e. the first and the last stack members must be linked to each other.

If there is only one chassis, both types of blade switches can form a stack within a single M1000e for redundancy and load distribution. Such a stack is called horizontal, after the way its cables run. A vertical stack contains up to six MXL switches installed in three different chassis within the same rack, again organized as a ring. With such a connection, all the equipment in the three M1000e chassis of the rack becomes part of a single physical network, aggregated by the MXL stack. From the outside, the stacked switches look like a single device with common settings, a single set of services and even a single IP address. From a fault-tolerance standpoint, however, it is preferable to use two stacks, each combining switches from different chassis (see figure).



An alternative to stacking for fault tolerance is the Dell Virtual Link Trunking (VLT) functionality on two MXL switches in a blade chassis (we wrote about this in detail here). The blade servers in the chassis can then be connected to the two MXLs using NIC teaming with LACP, and the rack switches can be connected to the two blade switches with a single LAG. On MXL, the VLTi link is usually built from the two built-in 40 GbE ports combined into a LAG.

MXL stack configuration example
This example shows the configuration of a stack of two MXL switches in fabric A1 (MXL-A1) and fabric A2 (MXL-A2), joined through two ports each: fortyGig 0/33 and 0/37.

Note the following peculiarity of a stack: when the DNOS software on the stack is updated, all the stack switches will be rebooted. This procedure should therefore be performed during the hours of lowest load. If this is unacceptable, Virtual Link Trunking (VLT) is recommended instead of stacking.

Switch preparation:

First, you need to put the switch ports into stacking mode.

Verify that the stacking ports are not yet configured on the MXL-A1 switch:

  MXL-A1# sho system stack-ports
 Topology: Stand alone
 Interface   Connection   Link Speed   Admin Status   Link Status   Trunk Group
                          (Gb/s)
 ------------------------------------------------------------------------------


Usually the two built-in 40 GbE ports, fortyGig 0/33 and 0/37, are used for stacking. The mapping of the switch's physical ports to stack-group numbers is shown in the figure below:



  MXL-A1(conf)# stack-unit 0 stack-group 0
  MXL-A1(conf)# stack-unit 0 stack-group 1


For deterministic stack-master election, it is recommended to set a priority on each switch participating in the stack. The switch with the highest priority becomes the master:

  MXL-A1(conf)# stack-unit 0 priority 14


Verify that the stacking ports are not yet configured on the MXL-A2 switch:

  MXL-A2# sho system stack-ports
 Topology: Stand alone
 Interface   Connection   Link Speed   Admin Status   Link Status   Trunk Group
                          (Gb/s)
 ------------------------------------------------------------------------------


Configure the stacking ports and priority on the MXL-A2 switch:

  MXL-A2(conf)# stack-unit 0 stack-group 0
  MXL-A2(conf)# stack-unit 0 stack-group 1
  MXL-A2(conf)# stack-unit 0 priority 1


After that, you must save the configuration and reboot both switches:

  MXL-A1# wr
  MXL-A1# reload
  MXL-A2# wr
  MXL-A2# reload


Wait for the switches to reboot and connect the two stacking ports between the switches with 40GbE QSFP + Direct Attach cables or optical transceivers and optical cables:
fortyGig 0/33 on the MXL-A1 switch <-> fortyGig 0/33 on the MXL-A2 switch
fortyGig 0/37 on the MXL-A1 switch <-> fortyGig 0/37 on the MXL-A2 switch

On the stack master, check that the switches have been joined into the stack:

  MXL-A1# sho system brief
 Stack MAC: d0:67:e5:cc:8e:f0
 Reload Type: normal-reload [Next boot: normal-reload]
 - Stack Info -
 Unit  UnitType    Status       ReqTyp         CurTyp         Version   Ports
 ------------------------------------------------------------------------------
   0   Management  online       MXL-10/40GbE   MXL-10/40GbE   8.3.16.4  56
   1   Standby     online       MXL-10/40GbE   MXL-10/40GbE   8.3.16.4  56
   2   Member      not present
   3   Member      not present
   4   Member      not present
   5   Member      not present

  MXL-A1# sho system stack-ports
 Topology: Ring
 Interface   Connection   Link Speed   Admin Status   Link Status   Trunk Group
                          (Gb/s)
 ------------------------------------------------------------------------------
   0/33        1/37         40           up             up
   0/37        1/33         40           up             up
   1/33        0/37         40           up             up
   1/37        0/33         40           up             up


Additional stacking information for MXL switches can be found in this document.

If simplicity is not the priority: more capabilities
A clarification is needed regarding the impossibility of stacking six M I/O Aggregators: it applies only to the basic out-of-the-box settings. When the mode is switched via the command line, four operating modes become available (the list below follows Dell's documentation for the I/O Aggregator):

Standalone mode (the zero-touch default);
Stacking mode;
VLT mode;
Programmable MUX (PMUX) mode.


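Judging by Dell's documentation for the I/O Aggregator, the mode is changed with the iom-mode command, followed by saving the configuration and a reload. The sketch below is illustrative (IOA-A1 is a hypothetical hostname, and the exact mode keywords should be verified for your OS version):

  IOA-A1(conf)# stack-unit 0 iom-mode stack
  IOA-A1(conf)# exit
  IOA-A1# wr
  IOA-A1# reload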
All in all, the MXL and the M I/O Aggregator are similar in more than just appearance.

A few words from deployment practice
Let's explain some terms and magic numbers. The M1000e chassis is 10U high and suitable for dense installation. In HPC (High Performance Computing) environments, free space in the server room and in each cabinet is as valuable, and runs out as quickly, as free space on the hard disk of a workstation. It is therefore considered good practice to use racks and cabinets 42U to 47U high. A rack is usually a more or less self-sufficient unit, even when it is not the only one. The heaviest equipment is therefore mounted at the bottom: the UPS, which usually takes up to 10U. Above it come the server chassis, and exactly three M1000e fit. Optionally, with this layout, a retractable 1U console can be installed at the 20U level for working directly with the servers and other equipment. As we noted, small storage systems can be installed right in the chassis; larger ones will take up another 2U to 5U at the top.

Convergence
Since both of the switches in question support FCoE, offer expansion cards with FC ports, and can work directly with storage systems that support this data transfer protocol, they can be used to connect such storage systems directly to a common converged network.

Both switches support FCoE convergence in FCoE transit mode; with an FC-port module installed, they can connect to an existing FC fabric in NPIV Proxy Gateway mode or, for small "data center in a box" installations, attach FC storage directly (see figure). Convergence lets you save on HBAs in the blade servers and on FC blade switches in the chassis.
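As a rough illustration of FCoE transit mode, FIP snooping is enabled globally, then on the FCoE VLAN and the FCF-facing uplink. This is a sketch only: the VLAN and port numbers are assumptions, and the exact commands should be verified against the OS9 FCoE documentation:

  feature fip-snooping
  !
  interface vlan 1002
   fip-snooping enable
  !
  interface fortyGigE 0/49
   fip-snooping port-mode fcf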



MXL Switch Configuration Example
Finally, we would like to give an example of a working configuration of a Dell Networking MXL blade switch installed in a Dell PowerEdge M1000e chassis.

The example will be useful to engineers setting up Dell switches running Dell Networking OS 9.x for L2 scenarios for the first time.
Comments in the text are given in square brackets.

MXL-A1 # show running-config
Current Configuration ...
! Version 9.8(0.0)
[Indicates the version of Dell Networking OS running the switch]
! Last configuration change at Thu Jul 23 17:49:01 2015 by default
! Startup-config last updated at Sat May 16 04:52:23 2015 by default
!
boot system stack-unit 0 primary system: A:
boot system stack-unit 0 secondary system: B:
[The Dell switch is assigned a stack-unit number. When Dell switches are not stacked, they are numbered "0". If the switches are combined into a stack, the number corresponds to the position in the stack: from 0 to 5.
The boot system command points to two partitions: A and B. Each partition can hold its own version of Dell Networking OS. "A" is the primary and "B" is the backup partition. At boot, the switch automatically loads the Dell Networking OS version from the primary partition, unless the boot system command specifies otherwise.
In Dell Networking OS, a command takes effect in the current (running) configuration as soon as it is entered. To save the current configuration, copy the running configuration to another location, for example to the switch's non-volatile memory using the copy running-config startup-config command.]
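For example (the wr command used earlier in the stacking example is a short form of the same operation):

  MXL-A1# copy running-config startup-config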
!
redundancy auto-synchronize full
[Set by default. Used in a stack to keep the standby unit synchronized with the stack master.]
!
hostname MXL-A1
!
!
username admin password 7 888dc89d1f1bca2882895c1658f993e7 privilege 15
username root password 7 d7acc8a1dcd4f698 privilege 15 role sysadmin
[Create local administrator accounts. Dell Networking OS also supports authentication and authorization via RADIUS and TACACS+ servers. Because no "enable password" or "enable secret" command is set, the administrator lands directly in privileged EXEC mode (hostname#) after successfully entering the password.]
!
default vlan-id 11
[Assigns VLAN 11 as the native/untagged/default VLAN. The native VLAN carries untagged traffic.]
!
!
protocol spanning-tree rstp
no disable
bridge-priority 57344
[Configure the 802.1w RSTP protocol. Dell Networking OS 9.x also supports PVST+, MST (802.1s) and STP (802.1D).
If an 802.1w RSTP domain of Dell switches connects to a Rapid-PVST+ domain of Cisco switches, VLAN 1 must be enabled on both sides of the trunk port (or port-channel) connecting the Dell and Cisco switches. This allows the RSTP domain to converge through the Common Spanning Tree (CST), which uses VLAN 1.
In Cisco IOS and NX-OS, VLAN 1 is the only VLAN in which the Cisco Rapid-PVST+ protocol can interoperate with switches from other vendors that run standard RSTP 802.1w. Only in VLAN 1 does a Cisco switch send, in addition to its proprietary BPDUs, the standard BPDUs needed to converge the topology with other switches; an 802.1w switch uses standard BPDUs in VLAN 1. The proprietary Cisco BPDUs are tunneled through the network of third-party switches to the next Cisco switch so that the Cisco PVST+ protocol can converge.]
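For completeness, the Cisco side of such a trunk, in the same style as the Cisco examples later in this article, might look like this (the port number is hypothetical):

  interface TenGigabitEthernet 1/1
   switchport mode trunk
   switchport trunk allowed vlan 1,2,11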
!
no iscsi enable
[This command disables the iSCSI optimizations in Dell switches: jumbo MTU settings, flow control, unicast storm control and others, which are enabled by default in Dell Networking OS.]
!
vlt domain 2
peer-link port-channel 10
back-up destination 192.168.2.2
primary-priority 57344
system-mac mac-address 02:01:e8:00:01:02
unit-id 0
[These commands configure the Dell Virtual Link Trunking (VLT) functionality, an implementation of multi-chassis LAG technology. VLT combines two switches into a fault-tolerant configuration for connecting other switches or servers via a Link Aggregation Group (LAG) and the 802.3ad LACP protocol. Because the physical ports belong to one logical port-channel interface, the Spanning Tree protocol does not block individual ports, and traffic is balanced across all physical ports in the group.
The two VLT switches form domain number 2. In this example, the link connecting the two VLT switches, called VLTi, consists of two 40G ports (Fo0/33 and Fo0/37) combined into port-channel number 10 (any number in the range from 1 to 128 can be used). The VLTi link is used to synchronize service information and all the tables (MAC address and ARP tables, IGMP, etc.). This allows the two VLT switches to function as one logical switch for the devices connected to them. One switch takes the VLT primary role (it can be set with the primary-priority command) and the other the secondary role; the roles determine switch behavior in network failure scenarios and in STP BPDU processing.
The back-up destination IP address is the management interface address of the VLT neighbor. When a VLT domain is created, Dell Networking OS can assign the MAC address for the peering itself and give the switches unique unit IDs. Setting them explicitly with the system-mac mac-address and unit-id commands minimizes the resynchronization time of the VLT neighbors after one of the switches is reloaded.]
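The VLT settings on the second switch (MXL-A2) would mirror these. A sketch with assumed values follows: system-mac must match on both peers, unit-id must differ, and back-up destination points to the management IP of the other switch (here 192.168.2.2, MXL-A1's address per the ManagementEthernet section below; the priority value is also an assumption):

  vlt domain 2
   peer-link port-channel 10
   back-up destination 192.168.2.2
   primary-priority 36864
   system-mac mac-address 02:01:e8:00:01:02
   unit-id 1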
!
stack-unit 0 provision MXL-10/40GbE
!
stack-unit 0 port 41 portmode quad
!
stack-unit 0 port 45 portmode quad
[The built-in 40GE ports of the MXL switch (Fo0/33 and Fo0/37) operate in 40GE mode by default. They are often used for stacking or for the VLTi link when VLT is deployed. By default, the 40GE ports on the 2-port expansion module for the MXL switch operate in 4 x 10GE (quad) mode. If these ports are to be used for stacking, they must be converted to 40GE mode.

To return a port from 4 x 10GE mode to 40GE mode, use the command:
switch(conf)# no stack-unit <switch number in stack> port <40GE port number: 33, 37, 41, 45, 49 or 53> portmode quad

stack-unit <switch number in stack>: the switch number in the stack, in the range 0-5. To display the switch number in the stack, use the show system brief command.
port <40GE port number>: the 40GE QSFP+ port number. The built-in 40GE ports of the MXL switch are 33 and 37; the port numbers for expansion slot 0 are 41 and 45; for expansion slot 1, 49 and 53.
portmode quad: indicates that the 40GE port operates in 4 x 10GE mode.]
!
!
!
interface TenGigabitEthernet 0/1
description internal server
no ip address
portmode hybrid
[Allows the port to switch tagged and untagged VLANs. Corresponds to Cisco command “switchport mode trunk”]
switchport
[Defines port as Layer 2]
spanning-tree rstp edge-port
[Allows the port to skip the STP Listening and Learning phases and go straight into STP Forwarding mode. Used on ports to which servers are connected. Corresponds to the "portfast" command in Cisco IOS.]
no shutdown
!
!
!
interface fortyGigE 0/33
description port-channel_member_10
no ip address
no shutdown
!
interface fortyGigE 0/37
description port-channel_member_10
no ip address
no shutdown
!
!
!
interface fortyGigE 0/49
description member port-channel 1
no ip address
port-channel-protocol LACP
port-channel 1 mode active
[Adds the Fo0/49 port to the port-channel 1 aggregation group and enables the LACP protocol on it to negotiate LAG settings with the peer switch.]
no shutdown
!
interface fortyGigE 0/53
description member port-channel 1
no ip address
port-channel-protocol LACP
port-channel 1 mode active
no shutdown
[Adds the Fo0/53 port to the port-channel 1 aggregation group and enables LACP on it.
The same settings can be applied to a group of ports using the range command:
switch(conf)# interface range fo 0/49-53 ]
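Once both members are up, the state of the aggregated link can be checked, for example, with a show command of this kind (output omitted):

  MXL-A1# show interfaces port-channel 1 brief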
!
!
interface ManagementEthernet 0/0
ip address 192.168.2.2/24
no shutdown
[This port connects to the dedicated management network through the internal management switch of the M1000e blade chassis. The iDRAC management interfaces of the blade servers connect to the same L2 switch. The Chassis Management Controller (CMC) module of the blade chassis has an external RJ-45 port for connecting to the dedicated management network.]
!
!
!
interface Port-channel 1
description Port-Channel to S6000 switches
no ip address
switchport
vlt-peer-lag port-channel 1
[Configures port-channel 1 as an L2 port and as a VLT port-channel. A VLT port-channel consists of physical ports distributed between the two switches of the VLT domain; all the ports are active, and traffic is balanced across all ports in the port-channel without STP blocking any of them.]
no shutdown
!
!
!
interface Port-channel 10
description VLTi link to 450-MXL-A2
no ip address
channel-member fortyGigE 0/33,37
[This is a static port-channel configuration without the LACP protocol. The VLTi port-channel must be configured statically.]
no shutdown
!
!
interface vlan 1
no ip address
tagged TenGigabitEthernet 0/1-32
tagged port-channel 1
[Makes VLAN 1 tagged on ports Te0/1-32 (the ports facing the blade servers) and on port-channel 1. We do not recommend using VLAN 1 for any production traffic, since it is used by control-plane protocols such as STP.]
no shutdown
!
!
interface vlan 2
description WAN_VLAN
no ip address
tagged TenGigabitEthernet 0/1-32
tagged port-channel 1
[Makes VLAN 2 tagged on ports Te0/1-32 and port-channel 1.]
no shutdown
!
!
interface vlan 11
description vlan 11
untagged TenGigabitEthernet 0/1-32
untagged Port-channel 10
[Makes VLAN 11 untagged on ports Te0/1-32 and port-channel 10. Untagged traffic arriving at the switch will fall into VLAN 11.]
!
!
[One of the differences between setting up switches running Dell Networking OS 9.x and Cisco switches is the configuration of VLANs on switch ports.

The VLAN 1, 2 and 11 configuration above makes the Te0/1-32 ports and port-channel 1 trunks: VLANs 1 and 2 are tagged, and VLAN 11 is configured as the native/untagged VLAN. On Cisco switches, the equivalent configuration is applied to the physical ports or the port-channel and looks like this:

interface Te0/1 (or Po1)
switchport mode trunk
switchport trunk allowed vlan 1,2,11
switchport trunk native vlan 11

If, for example, you need to configure the Te0/20 port as an access port in VLAN 100, on a Dell switch you can do this with the commands:

interface VLAN 100
description HR_Department_LAN
untagged TenGigabitEthernet 0/20

On a Cisco switch, this is configured with the commands:

interface TenGigabitEthernet 0/20
switchport mode access
switchport access vlan 100

]
!
!
!
management route 0.0.0.0/0 192.168.2.1
[The default route for the management network with the next-hop IP address.]
!
ip domain-name dell.local
ip domain-lookup
[Setting the default domain name and enabling dynamic resolution for domain names.]
ip name-server A.B.C.D
ip name-server E.F.G.H
[Specifies the IP addresses of the DNS servers to which requests will be sent to resolve the domain names of the hosts.]
!
!
!
!
banner login ^C

##########################################################
#                                                        #
#                        WARNING                         #
#                                                        #
#  This is a private device that may be used only by     #
#  authorized personnel. Use of this resource can be     #
#  monitored and unauthorized use is strictly prohibited #
#                                                        #
##########################################################

^C
!
ntp server A.B.C.D
ntp server E.F.G.H
[Addresses of the servers for time synchronization via the Network Time Protocol.]
!
clock timezone GMT 3
[Defines time zone GMT +3.]
ip ssh server version 2
[Enables the SSH server (version 2) for remote management of the switch.]
!
no dcb enable
[Disables the Data Center Bridging functionality, which is used in converged FCoE and iSCSI network scenarios.]
!
!
!
protocol lldp
[Enables Link Layer Discovery Protocol. Switches running Dell Networking OS 9.x support LLDP instead of Cisco CDP.]
!
!
line console 0
line vty 0
line vty 1
line vty 2
line vty 3
line vty 4
line vty 5
line vty 6
line vty 7
line vty 8
line vty 9
!
reload-type
boot-type normal-reload
[The switch is configured for the normal boot mode. There is also an alternative boot mode called BMP (Bare Metal Provisioning), which allows the switch to automatically download its configuration file and OS from a remote server and simplifies the rollout of a large number of switches. A new switch operates in BMP boot mode by default. To switch to the normal boot mode, run the following commands on the switch command line:

FTOS# stop bmp
FTOS# reload-type normal-reload

]
!
end

Source: https://habr.com/ru/post/146096/

