This article discusses a current trend in data center networks: the physical or logical separation of the control (traffic processing) plane from the data plane, known as software-defined networking, and the advantages of this approach.
Virtualization of compute and storage is firmly entrenched in data centers: the technologies are mature and work well. The network environment, however, has always faced, and still faces, a number of difficulties:
- Static or manual allocation and redistribution of network resources;
- Separate configuration of each network device, which does not scale when there are many of them;
- Complex and resource-intensive implementation and modification of network policies, configurations, and new services;
- Multi-vendor environments and proprietary, vendor-specific features.
Many large vendors, including IBM, see the solution to these problems in software-defined networks, which fundamentally change the economics and practice of deploying IT systems.
Let us consider the new approach in more detail. A typical telecommunications device that performs switching or routing carries out three tasks simultaneously (we will call them planes):
Control plane
Builds the information about the network topology, i.e. the switching tables (Forwarding Information Base, FIB) at Layer 2 of the OSI model and the routing tables (Routing Information Base, RIB) at Layer 3. These tables are populated mainly by protocols that construct a map of the network, for example OSPF for routing or Spanning Tree for switching. This plane is also responsible for implementing quality-of-service and security policies.
Data plane
Performs packet or frame forwarding to a specific interface or port based on the RIB or FIB tables.
Management plane
Monitors and manages the control and data planes.
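To make this division of responsibilities concrete, here is a minimal Python sketch of the three planes inside a single device. The class names, data structures, and routes are invented purely for illustration and do not correspond to any real device software.

```python
# Hypothetical sketch: the three planes of a single network node.
# All names and data structures are illustrative; real devices implement
# these planes in firmware and in forwarding ASICs.

class ControlPlane:
    """Builds the forwarding state (RIB/FIB) from topology information."""
    def __init__(self):
        self.rib = {}  # destination prefix -> next hop

    def learn_route(self, prefix, next_hop):
        # In a real device this is populated by OSPF, IS-IS, BGP, etc.
        self.rib[prefix] = next_hop

    def build_fib(self):
        # The FIB is the forwarding state actually pushed to the data plane.
        return dict(self.rib)

class DataPlane:
    """Forwards packets using only the FIB installed by the control plane."""
    def __init__(self, fib):
        self.fib = fib

    def forward(self, dst_prefix):
        # Real hardware performs longest-prefix match in an ASIC;
        # a plain dictionary lookup stands in for that here.
        return self.fib.get(dst_prefix, "drop")

class ManagementPlane:
    """Monitors and configures the other two planes."""
    def __init__(self, control, data):
        self.control, self.data = control, data

    def status(self):
        return {"routes": len(self.control.rib), "fib_entries": len(self.data.fib)}

# A single node wiring the three planes together.
control = ControlPlane()
control.learn_route("10.0.0.0/24", "eth1")
data = DataPlane(control.build_fib())
mgmt = ManagementPlane(control, data)
print(data.forward("10.0.0.0/24"), mgmt.status())
```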
Figure 1. A typical network node

Historically, telephone networks already had the notion of a separate traffic processing plane: the signaling function (setting up and tearing down calls) was tied to the telecommunications equipment and could be carried either within the voice channel itself or in separate signaling channels.
The infrastructure of corporate switched networks is undergoing the same transformation: from an independent processing plane on each device to a single, integrated one. Today the data and processing planes exchange information within one network node over an internal switching fabric; the modern industry approach extends this separation to data center networks of any size, where the fabric becomes responsible for communication between the data planes and the traffic processing plane.
Separation of the processing and data planes first appeared in high-end corporate routers and switches, where it was implemented on two separate processors, one for the processing plane and one for data transfer, which significantly improved the performance of such devices. Today the processing plane runs on general-purpose, highly programmable hardware, while the data plane runs on specialized chips (ASICs) optimized primarily for packet forwarding.
The next evolutionary step follows from the observation that when each network device runs its own processing plane (as shown in Fig. 2), performance can suffer: maintaining these independent planes adds overhead that can affect data traffic. Problems can also arise from insufficient or asynchronous coordination between them, which leads to the idea of consolidating the processing planes into a single point where the network topology is built.
Fig. 2. Independent traffic processing planes

The final step toward complete separation of the data and processing planes within the data center network and its switching infrastructure is shown in Figure 3.
Fig. 3. Centralized traffic processing plane

With this approach there are effectively two parallel networks: one carries the traffic, the other processes it, using external signaling mechanisms.
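To illustrate what "two parallel networks" means in practice, here is a hypothetical sketch in which a controller computes forwarding entries for every switch and installs them over a separate control channel (modeled here as a direct method call). The names and the push_policy interface are invented for illustration and are not a real controller API.

```python
# Conceptual sketch of the centralized model from Figure 3: switches never
# compute forwarding state themselves; the controller, which has the global
# view, installs it over an out-of-band control network.

class Switch:
    def __init__(self, name):
        self.name = name
        self.fib = {}  # installed by the controller, never computed locally

    def install(self, prefix, port):
        self.fib[prefix] = port

    def forward(self, prefix):
        # Traffic with no installed entry is punted to the controller.
        return self.fib.get(prefix, "send to controller")

class Controller:
    def __init__(self, switches):
        self.switches = {s.name: s for s in switches}

    def push_policy(self, rules):
        # rules: {switch_name: {prefix: port}} -- the controller decides
        # the path end to end and programs every hop.
        for name, table in rules.items():
            for prefix, port in table.items():
                self.switches[name].install(prefix, port)

s1, s2 = Switch("leaf-1"), Switch("leaf-2")
ctl = Controller([s1, s2])
ctl.push_policy({"leaf-1": {"10.0.2.0/24": "uplink-1"},
                 "leaf-2": {"10.0.1.0/24": "uplink-2"}})
print(s1.forward("10.0.2.0/24"))     # 'uplink-1'
print(s2.forward("192.168.0.0/16"))  # punted to the controller
```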
The main conclusions and advantages of this approach are as follows:
- Allocation of network resources is automated for both physical and virtualized data center networks, reducing network operating costs. This is especially true for virtualized networks, where virtual machine migration causes frequent mobility and regular changes;
- The complexity of network configuration is significantly reduced, for example when activating security policies, quality of service, routing, filtering, or authentication;
- Management becomes centralized;
- Downtime in a networked environment is significantly reduced;
- Non-optimal traffic paths can be avoided, since a centralized traffic processing plane can choose a path based on performance, availability, and other advanced parameters across the network;
- There is no need to use the Spanning-Tree protocol to exchange network topology information among network devices;
- With a single traffic processing plane, link-state algorithms (such as OSPF and IS-IS, which are based on Dijkstra's algorithm) become a better choice than distance-vector algorithms, because the centralized controller sees the full picture of the network rather than a partial one; a minimal sketch of this idea follows the list.
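As an illustration of the last point, the sketch below runs Dijkstra's algorithm over a complete, centrally known topology, the same computation that underlies link-state protocols such as OSPF and IS-IS. The fabric topology, node names, and link costs are made up for the example.

```python
# Minimal sketch: with the whole topology known to one controller,
# shortest paths can be computed directly with Dijkstra's algorithm.

import heapq

def shortest_path(topology, src, dst):
    """topology: {node: {neighbor: link_cost}}, known in full to the controller."""
    dist = {src: 0}
    prev = {}
    queue = [(0, src)]
    visited = set()
    while queue:
        d, node = heapq.heappop(queue)
        if node in visited:
            continue
        visited.add(node)
        if node == dst:
            break
        for neighbor, cost in topology.get(node, {}).items():
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                prev[neighbor] = node
                heapq.heappush(queue, (nd, neighbor))
    # Rebuild the path by walking back from the destination.
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]

fabric = {
    "leaf-1": {"spine-1": 1, "spine-2": 1},
    "leaf-2": {"spine-1": 1, "spine-2": 3},
    "spine-1": {"leaf-1": 1, "leaf-2": 1},
    "spine-2": {"leaf-1": 1, "leaf-2": 3},
}
print(shortest_path(fabric, "leaf-1", "leaf-2"))  # (['leaf-1', 'spine-1', 'leaf-2'], 2)
```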
The disadvantage is that a centralized switching controller can become a performance bottleneck and a potential single point of failure; redundancy solves only part of this problem.
This approach is becoming the industry standard and makes interoperability possible in a multi-vendor world. The widely used open protocol is called OpenFlow: it describes how the processing and data planes are separated, and the controller and OpenFlow switches use it to communicate with each other. Thanks to the performance now available from x86 hardware, the OpenFlow controller is, as a rule, a standard server. In simplified form, the OpenFlow architecture is shown in Figure 4. IBM was one of the first manufacturers to support OpenFlow in its network software and hardware products.
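The core idea of OpenFlow is a flow table of match/action entries installed by the controller. The sketch below models that idea only conceptually: it is not the OpenFlow wire protocol or a real controller API, and the field names and the send_to_controller fallback are illustrative assumptions.

```python
# Conceptual sketch of an OpenFlow-style flow table: each entry pairs match
# fields with actions, a priority, and counters.

from dataclasses import dataclass

@dataclass
class FlowEntry:
    priority: int
    match: dict        # e.g. {"in_port": 1, "eth_dst": "aa:bb:cc:dd:ee:ff"}
    actions: list      # e.g. ["output:2"] or ["drop"]
    packet_count: int = 0

def lookup(flow_table, packet):
    """Return the actions of the highest-priority entry whose match fields
    are all present in the packet; otherwise report a table miss."""
    for entry in sorted(flow_table, key=lambda e: e.priority, reverse=True):
        if all(packet.get(k) == v for k, v in entry.match.items()):
            entry.packet_count += 1
            return entry.actions
    return ["send_to_controller"]  # table miss: ask the controller what to do

table = [
    FlowEntry(priority=100, match={"eth_dst": "aa:bb:cc:dd:ee:ff"}, actions=["output:2"]),
    FlowEntry(priority=10,  match={}, actions=["drop"]),  # low-priority catch-all
]
print(lookup(table, {"in_port": 1, "eth_dst": "aa:bb:cc:dd:ee:ff"}))  # ['output:2']
print(lookup(table, {"in_port": 3, "eth_dst": "11:22:33:44:55:66"}))  # ['drop']
```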
Fig. 4. OpenFlow data center network architecture

Andrey Naydenov, network infrastructure design expert, IBM.