Convergence based on Hewlett-Packard Networking products.
Part 1 - a theoretical overview.
"…The main driver of convergence is the desire to reduce the cost of creating very complex and expensive systems, while the end product or service gains new qualities or expands its range."
Leonid Kolpachev
Today, data center networks are divided into two large blocks: storage area networks and local area networks, historically also called data networks. What is convergence and what is its purpose? The goal of convergence is to merge the two infrastructures into one common network. What for? To reduce costs, both capital (less equipment is needed) and operational (because there is less equipment and it is homogeneous, it is easier and cheaper to maintain). Are there options that avoid consolidating the network? Of course there are. If cost reduction is not on the agenda, you can continue to develop the two infrastructures in parallel; technologically, both kinds of networks today meet modern requirements.
Let's talk about the technologies for building converged network solutions and about the different options for building converged networks in a data center based on HP equipment. In this first part I will briefly recap the theory: what FC and FCoE are.
FC (Fibre Channel) is a high-speed protocol for connecting servers to various storage systems, designed for reliable, bidirectional data transfer. It can carry data over long distances (up to 10 km) and supports encapsulation of the SCSI, FICON and TCP/IP protocols.
One of FC's main functions is flow control for storage traffic. To guarantee lossless delivery, the B2B (Buffer-to-Buffer) credit mechanism is used. Briefly and somewhat simplified, it works like this: during link initialization the switch grants the traffic source a certain number of credits, which are then decremented as traffic is transmitted. When the source's credit count reaches zero, transmission stops until the switch sends an R_RDY primitive back to the server or storage system; the transmitting device's credit count is then incremented and transmission resumes. Device addressing in FC/FCoE uses two identifiers: the WWN (World Wide Name), a unique identifier assigned by the manufacturer, and the FC_ID, which the fabric assigns when the device logs in to it. The FC_ID is carried in FC frame headers, and traffic is switched across the fabric based on it. Dynamic routing in FC is performed by the FSPF protocol (analogous to OSPF in IP); it supports multipath routing and operates only inside the fabric. Access control in FC is based on VSANs (virtual SANs, for which you can draw an analogy with VLANs in Ethernet networks) and on zoning, which restricts access to resources in much the same way as access control lists (ACLs) do in Ethernet.
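The credit mechanism described above can be sketched in a few lines of Python. This is a minimal illustration of the idea, not an FC implementation; the class name and credit count are invented for the example.

```python
# A minimal sketch of the FC Buffer-to-Buffer credit mechanism.
# Names and numbers are illustrative, not taken from any real FC stack.

class B2BLink:
    def __init__(self, credits):
        self.credits = credits          # granted during link initialization

    def can_send(self):
        return self.credits > 0

    def send_frame(self):
        if not self.can_send():
            raise RuntimeError("transmission paused: zero credits")
        self.credits -= 1               # one credit consumed per frame sent

    def receive_r_rdy(self):
        self.credits += 1               # R_RDY from the far end frees a credit

link = B2BLink(credits=2)
link.send_frame()
link.send_frame()
assert not link.can_send()              # sender must pause; nothing is dropped
link.receive_r_rdy()
assert link.can_send()                  # transmission resumes
```

The key point the sketch shows: the sender pauses instead of dropping frames, which is how FC achieves lossless delivery without retransmission.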
Now a few words about FCoE and what it is. It is a technology for encapsulating FC frames in Ethernet. FCoE is the protocol on which network convergence in the data center is based; it is an attempt to "reuse" the existing standards of local area networks and storage networks to meet the needs of both data networks (LANs) and storage networks (SANs).
To provide lossless data transmission, FCoE must run over a fundamentally different transport, so-called Lossless Ethernet (sometimes referred to as Converged Enhanced Ethernet, CEE). It adds per-priority flow control (Priority-based Flow Control), lets you manage how traffic classes share bandwidth (Enhanced Transmission Selection), and manages network congestion (Congestion Notification). Lossless Ethernet thus provides, in large part, the same reliability that the standard B2B mechanism provides in FC. FCoE can also work without it, but achieving the required level of reliability is then much harder.
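The bandwidth-sharing part (Enhanced Transmission Selection) can be illustrated with a toy model. The priority-to-class mapping and percentages below are made-up example values, not a recommendation; carrying FCoE on priority 3 is merely a common convention.

```python
# Illustrative sketch of Enhanced Transmission Selection (ETS):
# 802.1p priorities are grouped into traffic classes, and each class
# is guaranteed a share of link bandwidth. Example values only.

priority_to_class = {3: "fcoe",                       # FCoE on priority 3
                     0: "lan", 1: "lan", 2: "lan",
                     4: "lan", 5: "lan", 6: "lan", 7: "lan"}
class_bandwidth = {"fcoe": 50, "lan": 50}             # percent of link capacity

def share_for_priority(prio):
    """Bandwidth share (%) guaranteed to traffic with a given priority."""
    return class_bandwidth[priority_to_class[prio]]

assert share_for_priority(3) == 50                    # storage gets its guarantee
assert sum(class_bandwidth.values()) == 100           # shares must add up
```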
The FC data path is simple: the application generates SCSI commands, they descend through the FC stack, are "wrapped" into the protocol and handed to the Host Bus Adapter, then enter the FC network, where the switch forwards them by FC_ID to the appropriate storage system. The same happens in the opposite direction. FCoE works somewhat differently: here traffic is split into two parts right at the application level. Requests to the storage system go as SCSI commands into FC, while access to network resources goes through the TCP/IP stack; on a converged adapter these two flows merge, are wrapped in Ethernet and sent into the network. From there, ordinary Ethernet traffic is processed in the standard way, while FCoE traffic arrives at an FCoE-capable switch, which unwraps it down to the FC level and, based on that data, switches it to the appropriate FCoE-capable storage system.
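The "wrapped in Ethernet" step can be sketched concretely. FCoE frames use EtherType 0x8906; the layout below is deliberately abridged (a real FCoE frame also carries version bits, SOF/EOF delimiters and padding), and the MAC addresses are invented for the example.

```python
import struct

# Simplified sketch of FCoE encapsulation: an FC frame carried inside an
# Ethernet frame with EtherType 0x8906. The full FCoE header (version,
# SOF/EOF delimiters, padding) is omitted for brevity.

FCOE_ETHERTYPE = 0x8906

def encapsulate(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    eth_header = struct.pack("!6s6sH", dst_mac, src_mac, FCOE_ETHERTYPE)
    return eth_header + fc_frame

target_mac = bytes.fromhex("0efc00010203")   # example fabric-provided MAC
cna_mac    = bytes.fromhex("001122334455")   # example converged adapter MAC
frame = encapsulate(target_mac, cna_mac, b"\x00" * 36)  # dummy FC payload

assert frame[12:14] == b"\x89\x06"           # EtherType marks this as FCoE
```

An FCoE-capable switch recognizes such frames by the EtherType, strips the Ethernet layer and switches on the FC headers inside, exactly as described above.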

Now briefly about the main port types used in FC/FCoE: ports between fabric switches are called E-ports; the ports between the fabric and traffic consumers/generators are F-ports and N-ports, respectively; and the port between a proxy switch and the fabric is an NP-port. In FCoE the ports are named similarly, with the letter V (for Virtual) added: VN, VE, VNP.

To summarize, here are some basic FC/FCoE concepts:
• When a device connects to an FC network, the fabric registers it and assigns it an FC_ID, which is then used to switch traffic from that N-port; this is the so-called fabric login process. This is also when the B2B (buffer-to-buffer) credits are initialized.
• VSAN: used for logical partitioning of a fabric based on physical ports, essentially for virtualization. As I said, it is in effect the analogue of a VLAN in Ethernet.
• Zoning: the access control mechanism in FC/FCoE, an analogue of a bidirectional ACL, which lets you isolate devices from one another.
• Continuing the Ethernet analogy: a VSAN is a virtual network, and zoning is the ACL, the access control list that restricts access within that VSAN.
• Routing in FC/FCoE is performed by the FSPF protocol, which is conceptually similar to OSPF in IP. It runs only on the fabric's inter-switch ports (E-ports).
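The zoning concept from the list above can be modeled as a membership check: two devices may communicate only if some zone contains both of their WWPNs. The zone names and WWPNs below are invented for illustration.

```python
# Sketch of zoning as a bidirectional ACL inside a VSAN: access between
# two WWPNs is allowed only if some zone contains both of them.
# Zone names and WWPN values are purely illustrative.

zones = {
    "zone_db":  {"10:00:00:00:c9:aa:bb:01", "50:06:01:60:3b:de:ad:01"},
    "zone_app": {"10:00:00:00:c9:aa:bb:02", "50:06:01:60:3b:de:ad:01"},
}

def access_allowed(wwpn_a: str, wwpn_b: str) -> bool:
    return any(wwpn_a in members and wwpn_b in members
               for members in zones.values())

# Both hosts may reach the shared storage port...
assert access_allowed("10:00:00:00:c9:aa:bb:01", "50:06:01:60:3b:de:ad:01")
# ...but not each other, since no zone contains both hosts.
assert not access_allowed("10:00:00:00:c9:aa:bb:01", "10:00:00:00:c9:aa:bb:02")
```

Note how the check is symmetric, which is exactly what makes zoning behave like a bidirectional ACL rather than a one-way filter.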
For FCoE to operate normally (and FCoE is a data plane protocol), a control plane is needed. In FCoE it is implemented by FIP (FCoE Initialization Protocol), which provides fabric discovery, login services, and so on. Keep in mind that these are two different protocols, although they are defined in the same standard, FC-BB-5.
A FIP Snooping Bridge (FSB) is a switch that sits between the fabric and the end devices (nodes) and monitors how nodes connect to the fabric (for example, it checks which VLAN the FC-MAP-based frames travel in and whether it matches what the fabric assigned).
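One concrete check an FSB can perform relies on how FCoE source MACs are built in FC-BB-5: a Fabric-Provided MAC Address (FPMA) is the fabric's 24-bit FC-MAP prefix concatenated with the 24-bit FC_ID assigned at login. The sketch below assumes the default FC-MAP value 0x0EFC00; the FC_ID is an invented example.

```python
# Sketch of an FPMA validity check, the kind of filtering a FIP Snooping
# Bridge applies: an FCoE frame's source MAC must equal the fabric's
# FC-MAP prefix plus the FC_ID the fabric assigned at login.
# 0x0EFC00 is the default FC-MAP; the FC_ID here is illustrative.

FC_MAP = bytes.fromhex("0efc00")

def fpma(fc_id: bytes) -> bytes:
    """Fabric-Provided MAC Address = FC-MAP (24 bits) + FC_ID (24 bits)."""
    return FC_MAP + fc_id

def frame_permitted(src_mac: bytes, assigned_fc_id: bytes) -> bool:
    return src_mac == fpma(assigned_fc_id)

assert frame_permitted(bytes.fromhex("0efc00010a00"), bytes.fromhex("010a00"))
assert not frame_permitted(bytes.fromhex("aabbcc010a00"), bytes.fromhex("010a00"))
```

A frame whose source MAC does not match the address assigned during login is dropped, which prevents a misbehaving node from spoofing another node's FC_ID.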

An FCF (FCoE Forwarder) is, in effect, the fabric itself: it implements all FC services (nodes log in to it, receive FC_IDs, and so on) and passes traffic between nodes. The differences between an FCF and an FSB follow from what I have already said: the FCF implements all FC services and switches FC traffic according to FC_IDs and its configuration, while the FSB merely listens in on traffic, supports the Lossless Ethernet standards, and validates the process of nodes connecting to the fabric. An FSB cannot operate without a fabric; it always needs a fabric upstream.

Concluding the theoretical part, let's talk about the important NPV and NPIV mechanisms: what they are and why they are needed. NPIV is the mechanism that allows several FC_IDs to be assigned to a single N-port; normally an N-port corresponds to one N_Port ID, and there is a one-to-one correspondence between a WWPN and an N_Port ID. Where and why is it needed? First of all, where several applications share access to the FC fabric through one Host Bus Adapter and you need to separate them and delimit their access to resources. A switch in NPV mode is a proxy that hides the allocation of several FC_IDs behind one N-port: its NP-port connects to an F-port of the fabric and acts as a proxy for the N-ports behind the NPV switch, which is especially important given that the number of FC switches (Domain IDs) in a fabric is limited. Most often, ToR or blade switches act as NPV switches, concentrating traffic from a rack or chassis. The NPV switch itself logs in to the fabric, while logins (FLOGI) from the attached nodes are converted to FDISC, and FC traffic is thus proxied. This saves Domain IDs, since the NPV switch does not consume one, which lets the network scale better. In addition, this mechanism lets the switch interoperate with equipment from other vendors.
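The login-proxying behavior described above can be sketched as a small state machine: the NPV switch's own uplink performs the one FLOGI, and every subsequent node login is forwarded upstream as FDISC. This is an illustration of the idea only, not a protocol implementation; the class name is invented.

```python
# Sketch of NPV login proxying: the NP-port logs in to the fabric's F-port
# once with FLOGI; FLOGIs from attached nodes are then forwarded as FDISC,
# so the NPV switch consumes no Domain ID of its own. Illustrative only.

class NPVSwitch:
    def __init__(self):
        self.uplink_logged_in = False

    def forward_login(self, request: str) -> str:
        """Return the login type actually sent upstream to the fabric."""
        if not self.uplink_logged_in:
            self.uplink_logged_in = True
            return "FLOGI"              # the NP-port's own fabric login
        return "FDISC" if request == "FLOGI" else request

npv = NPVSwitch()
assert npv.forward_login("FLOGI") == "FLOGI"   # first login: the switch itself
assert npv.forward_login("FLOGI") == "FDISC"   # node logins are proxied as FDISC
```

Because the fabric sees only FDISC requests on an already-established login, it treats the whole NPV switch as one big multi-ID host rather than as another switch, which is exactly what saves the Domain ID.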

A few words about how to assemble a ready converged solution from HP equipment. Hewlett-Packard has an extensive portfolio of data center switches that support FC/FCoE technologies, and above all the 5900CP converged switch with support for the full FC/FCoE stack. This switch is not new (it is well "run-in"), with reversible airflow direction, low port latency, high performance, 40G uplink ports, and IRF stacking (up to 9 units in a fabric, with 320 Gbps of stacking bandwidth). The stack allows you to fully realize the pay-as-you-grow concept: you add equipment to the stack as demand grows instead of paying the full amount up front. The switch supports converged transceivers that can operate in two modes, Ethernet and FC/FCoE, as well as non-converged transceivers that cannot be "turned" from FC into Ethernet or vice versa.
This diagram shows what your converged data center might look like: a 5900v virtual switch runs in the blade chassis and connects to a 5900-series ToR switch; the ToR then connects to the data center switching core (12500, 12900 or 11900); traffic leaving the site and between sites goes through HSR 6600-series or 6800 routers.

Finally, let me remind you once again of a key point of HP Networking's licensing policy: the switches ship with full-featured software and require no license to activate FC/FCoE functionality, nor TRILL, SPB, DCB, etc.