Modern organizations are looking to introduce new services and applications, but an obsolete network infrastructure that cannot support innovation often becomes a stumbling block. Technologies based on open standards are intended to solve this problem.
In IT as a whole, standards have won a strong position: customers almost always prefer standards-based solutions. With the passing of the era when mainframes dominated, standards took hold. They make it possible to combine equipment from different manufacturers, choose "best in class" products, and optimize the cost of a solution. In the network industry, however, things are not so simple.
Closed systems still dominate the network market, and compatibility between solutions from different manufacturers is ensured, at best, at the interface level. Despite the standardization of interfaces, protocol stacks, and network architectures, network and communication equipment from different vendors often amounts to proprietary solutions. For example, even deploying a modern network fabric such as Brocade Virtual Cluster Switch, Cisco FabricPath, or Juniper QFabric involves replacing the existing switches, which is not a cheap option. And that says nothing of the "last century" technologies that still work but inhibit the further development of networks and the applications running on them.
The evolution of networks: from proprietary to open solutions
Studies conducted in recent years show a gap between what network equipment vendors offer and what their customers prefer. For example, according to one survey, 67% of customers believe that proprietary products should be avoided whenever possible, while 32% allow their use. Only 1% of respondents believe that proprietary products and tools provide better integration and compatibility than standard ones. In other words, in theory most customers prefer standards-based solutions, yet mostly proprietary network products are on offer.
In practice, when buying new equipment or expanding the network infrastructure, customers often choose solutions from the same vendor or the same product family. The reasons are inertia and the desire to minimize risk when updating critical systems. However, standards-based products are much easier to replace, even with products from different manufacturers. In addition, under certain conditions a combination of systems from different vendors can provide a functional network solution at a reasonable price and reduce the total cost of ownership.
This does not mean that proprietary technologies (those not described by an open standard but unique to a particular vendor) should never be bought. They usually implement innovative features and tools, and proprietary solutions and protocols often deliver better performance than open standards. When choosing such technologies, however, their use should be minimized (or, better, excluded) at the borders between individual segments or technological nodes of the network infrastructure, which is especially important in multi-vendor networks. Examples of such segments include the access, aggregation, and core layers; the boundary between the local and wide area networks; and segments that implement network applications (for example, load balancing or traffic optimization).
Simply put, proprietary technologies should be confined to the boundaries of segments that implement specialized network functions and/or applications (a kind of standard "building block" of the network). When non-standard proprietary technologies form the basis of the entire corporate network or of large network domains, the risk of the customer being locked in to one manufacturer increases.
Hierarchical and flat networks
The purpose of building a corporate data network, whether a network of a geographically distributed company or a data center network, is to ensure the operation of business applications. The corporate network is one of the most important business development tools. In a company with a geographically distributed structure, the business often depends on the reliability and flexibility of the joint work of its divisions. The principle of dividing a network into "building blocks", each characterized by its own functions and implementation features, lies at the basis of network design. Industry-accepted standards allow network equipment from different vendors to be used as such building blocks. Private (proprietary) protocols limit the customer's freedom of choice, which results in reduced business flexibility and increased costs. By applying standardized solutions, customers can select the best product in each area of interest and integrate it with other products using open, standard protocols.
Modern large networks are very complex: they are defined by a variety of protocols, configurations, and technologies. A hierarchy organizes all of these components into an easily understood model. The hierarchical model helps in designing, implementing, and maintaining scalable, reliable, and cost-effective networks.
Three-tier corporate network architecture
The traditional corporate network architecture includes three layers: access, aggregation/distribution, and core. Each performs specific network functions.
The core layer is the foundation of the entire network. To achieve maximum performance, routing functions and traffic management policies are moved to the aggregation/distribution layer, which is responsible for proper packet routing and traffic policy. The task of the distribution layer is to aggregate all access layer switches into a single network, which significantly reduces the number of connections. As a rule, the most important network services and other modules are connected to distribution switches. The access layer is used to connect clients to the network. Data center networks were built in a similar way.
Outdated three-tier network architecture in the data center
Traditional three-tier architectures are oriented toward the client-server paradigm of network traffic. With the further development of virtualization and application integration, the flow of network traffic between servers keeps growing. Analysts speak of a change in the network traffic paradigm from the north-south direction to the east-west direction, i.e. a significant predominance of traffic between servers, in contrast to the exchange between servers and clients.
In the network architecture of a data center, the access layer corresponds to the boundary of the server farm. Here the three-tier network architecture is not optimized for transferring traffic between individual physical servers: instead of reducing the packet's path to one (or at most two) network layers, the packet traverses all three, increasing delays due to extra traffic in both directions.
In other words, traffic between servers passes through the access layer, the aggregation layer, and the network core and back along a non-optimal path, unjustifiably increasing the total length of the network segment and the number of layers at which packets are processed by network devices. Hierarchical networks are poorly adapted to data exchange between servers and do not fully meet the requirements of modern data centers with high-density server farms and intensive server-to-server traffic. Such a network typically uses traditional loop protection, device redundancy, and link aggregation protocols; its characteristics are significant delays, slow convergence, static behavior, and limited scalability. Instead of the traditional tree topology, more efficient topologies (Clos / leaf-spine / collapsed) should be used, reducing the number of layers and optimizing packet transmission paths.
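The difference in path length is easy to quantify with a toy model. The sketch below (plain Python, with hypothetical node names) builds a simplified three-tier tree and a two-tier leaf-spine fabric, then counts switch-to-switch hops between servers attached to different edge switches:

```python
from collections import deque

def build_three_tier(num_access=4, num_agg=2):
    """Adjacency map for a simple three-tier tree: one core switch,
    num_agg aggregation switches, num_access access switches each."""
    adj = {"core": set()}
    for a in range(num_agg):
        agg = f"agg{a}"
        adj["core"].add(agg)
        adj[agg] = {"core"}
        for x in range(num_access):
            acc = f"acc{a}-{x}"
            adj[agg].add(acc)
            adj[acc] = {agg}
    return adj

def build_leaf_spine(num_leaves=8, num_spines=2):
    """Two-tier leaf-spine (Clos) fabric: every leaf connects to every spine."""
    adj = {f"spine{s}": set() for s in range(num_spines)}
    for l in range(num_leaves):
        leaf = f"leaf{l}"
        adj[leaf] = set()
        for s in range(num_spines):
            adj[leaf].add(f"spine{s}")
            adj[f"spine{s}"].add(leaf)
    return adj

def hops(adj, src, dst):
    """Breadth-first search: shortest number of inter-switch hops."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for nxt in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None

three = build_three_tier()
clos = build_leaf_spine()
# Server-to-server traffic between access switches under different
# aggregation switches must climb the whole tree and descend again:
print(hops(three, "acc0-0", "acc1-0"))  # 4 hops: access-agg-core-agg-access
# In leaf-spine, any two edge switches are exactly one spine apart:
print(hops(clos, "leaf0", "leaf7"))     # 2 hops: leaf-spine-leaf
```

The model ignores link capacity and redundancy, but it shows the structural point: in the tree, east-west paths grow with the depth of the hierarchy, while in a leaf-spine fabric every pair of edge switches is equidistant.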
HP simplifies the three-tier network architecture (typical of traditional Cisco network designs) to a two-tier or single-tier one. The trend now is that more and more customers, when building their networks, are oriented toward Layer 2 (L2) data networks with a flat topology. In data center networks, the transition is stimulated by the growth of server-to-server and server-to-storage traffic flows. This approach simplifies network planning and implementation, reduces operating costs and total investment, and makes the network more productive.
In a data center, a flat (L2) network better meets the needs of application virtualization, allowing virtual machines to be moved efficiently between physical hosts. Another advantage, realized in the presence of efficient clustering/stacking technologies, is that the STP/RSTP/MSTP protocols are no longer needed. Such an architecture, combined with virtual switches, provides loop protection without STP, and in case of failures the network converges an order of magnitude faster than with traditional STP protocols.
The network architecture of a modern data center should effectively support the transmission of large volumes of dynamic traffic, driven by the significant growth in the number of virtual machines and the level of application integration. Here it is worth noting the increasing role of various IT infrastructure virtualization technologies based on the concept of software-defined networking (SDN).
The SDN concept is currently widely distributed not only at the level of the network infrastructure of individual sites, but also at the levels of computing resources and storage systems within both separate and geographically distributed data centers (examples of the latter are HP Virtual Cloud Networking - VCN and HP Distributed Cloud Networking - DCN).
A key feature of the SDN concept is the integration of physical and virtual network resources and their functionality within a single virtual network. It is important to understand that although network virtualization (overlay) solutions can run on top of any network, the performance and availability of applications and services largely depend on the performance and parameters of the physical infrastructure (underlay). Combining the benefits of an optimized physical architecture and an adaptive virtual network architecture makes it possible to build unified network infrastructures that efficiently carry large streams of dynamic traffic based on application demands.
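One concrete way the underlay constrains the overlay is MTU: an encapsulation protocol such as VXLAN (mentioned later in this article) adds a fixed per-packet overhead that the physical network must be able to carry. A minimal sketch, using the standard VXLAN (RFC 7348) header sizes:

```python
# Per-packet overhead added by VXLAN encapsulation (RFC 7348), in bytes.
OUTER_ETHERNET = 14   # outer MAC header
OUTER_IPV4     = 20   # outer IPv4 header
OUTER_UDP      = 8    # outer UDP header
VXLAN_HEADER   = 8    # VXLAN header carrying the 24-bit VNI

VXLAN_OVERHEAD = OUTER_ETHERNET + OUTER_IPV4 + OUTER_UDP + VXLAN_HEADER

def required_underlay_mtu(tenant_mtu=1500):
    """MTU the physical (underlay) network must support so that tenant
    frames of tenant_mtu bytes survive VXLAN encapsulation without
    fragmentation."""
    return tenant_mtu + VXLAN_OVERHEAD

print(VXLAN_OVERHEAD)               # 50 bytes of overhead per packet
print(required_underlay_mtu())      # 1550 for standard 1500-byte frames
print(required_underlay_mtu(9000))  # 9050 for jumbo frames
```

If the underlay is left at the default 1500-byte MTU, encapsulated packets are fragmented or dropped, which is exactly the kind of physical-layer dependence the paragraph above describes.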
HP FlexNetwork architecture
To build flat networks, vendors develop the appropriate equipment, technologies, and services; examples include Cisco Nexus, Juniper QFabric, and HP FlexFabric. HP's core offering is the open, standards-based HP FlexNetwork architecture.
HP FlexNetwork includes four interrelated components: FlexFabric, FlexCampus, FlexBranch, and FlexManagement. HP FlexFabric, HP FlexCampus, and HP FlexBranch optimize the network architectures of data centers, campuses, and branch offices, allowing customers to migrate from traditional hierarchical infrastructures to unified, virtual, high-performance, converged networks as they grow, or to build such networks immediately based on HP's recommended reference architectures.
HP FlexManagement provides comprehensive monitoring; automation of the deployment, configuration, and control of multi-vendor networks; and unified management of virtual and physical networks from a single console. This accelerates service deployment, simplifies management, improves network availability, and eliminates the difficulties associated with using multiple administration systems. Moreover, the system can manage devices from dozens of other network equipment manufacturers.
HP FlexFabric supports switching at up to 100GbE in the network core and up to 40GbE at the access layer, and uses HP Virtual Connect technology. By implementing the FlexFabric architecture, organizations can gradually move from three-tier networks to optimized two- and one-tier networks.
Customers can gradually migrate from proprietary legacy networks to the HP FlexNetwork architecture with the help of HP Technology Services. HP offers migration services from proprietary network protocols, such as Cisco EIGRP (although Cisco calls this protocol an "open standard"), to the truly standard routing protocols OSPF v2 and v3. In addition, HP offers FlexManagement administration services and a set of services covering the life cycle of each HP FlexNetwork modular "building block", including the planning, design, implementation, and maintenance of corporate networks.
HP continues to improve the capabilities of its equipment, both at the level of hardware platforms and within the software-defined networking (SDN) concept, introducing various protocols for the dynamic management of switches and routers (OpenFlow, NETCONF, OVSDB). To build scalable Ethernet fabrics, technologies such as TRILL, SPB, and VXLAN have been introduced in a number of HP network device models (the list of devices supporting these protocols is constantly expanding). In addition to standard data center interconnect protocols (in particular, VPLS), HP has developed and is actively improving proprietary technologies for efficiently combining geographically distributed data centers into a single L2 network. For example, the current implementation of HP EVI (Ethernet Virtual Interconnect) allows up to 64 data center sites to be combined in this way. The combined use of HP EVI and HP MDC (Multitenant Device Context) device virtualization technology provides additional opportunities for expanding distributed virtualized L2 networks and improving their reliability and security.
Conclusions
In each particular case, the choice of network architecture depends on many factors: the technical requirements for the corporate network or data center, the wishes of end users, infrastructure development plans, experience, competence, and so on. As for proprietary versus standard solutions, the former can sometimes cope with tasks for which standard solutions are not suitable. However, at the border between network segments built on equipment from different vendors, the possibilities for using them are extremely limited.
The large-scale use of proprietary protocols as the basis of a corporate network can seriously limit freedom of choice, which ultimately affects business agility and increases costs.
Open, standards-based solutions help companies migrate from legacy architectures to modern, flexible network architectures that meet current challenges such as cloud computing, virtual machine migration, unified communications and video delivery, and high-performance mobile access. Organizations can choose best-in-class solutions that meet their business needs, and using open, standard protocol implementations reduces the risks and cost of changing the network infrastructure. In addition, open networks that combine physical and virtual network resources and their functionality simplify the migration of applications to private and public clouds.
Our previous publications:
» Implementing MSA in a virtualized enterprise environment
» HP MSA Disk Arrays as a Basis for Data Consolidation
» Multivendor corporate network: myths and reality
» Available HP ProLiant server models (10 and 100 series)
» Convergence based on HP Networking. Part 1
» HP ProLiant ML350 Gen9 - a server with insane extensibility
Thank you for your attention; we are ready to answer your questions in the comments.