Good day, everyone. Today we would like to discuss the technologies and tools that make the network part of a data center "software-defined."
First of all, when building an SDDC it is worth thinking about macro-virtualization technology. It implies a working tandem of a virtual machine (VM) management system, DCIM (Data Center Infrastructure Management) software and SNMP (Simple Network Management Protocol) adapters. With the help of the adapters, DCIM collects and aggregates information about the state of the data center's engineering infrastructure, the availability of free floor space and free space in racks. The system gives the provider the most complete "here and now" picture of what is happening in its data centers. If information from the VM management system is added to the DCIM data, it becomes possible to identify problem spots in the data center (exceeded temperature thresholds, power shortages, etc.) and shift the computing load to a different physical zone of the data center, or to another data center altogether. The company can migrate heavily loaded virtual machines closer to the engineering infrastructure that suits them best, so the "hottest" server zones end up with the most cooling headroom.
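For illustration, here is a minimal sketch of the kind of polling such SNMP adapters perform, using the Open Source pysnmp library (the synchronous v4 hlapi is assumed). The host address is a hypothetical placeholder that would come from the DCIM equipment inventory.

```python
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

def read_sensor(host, oid, community="public"):
    """Read a single value (e.g. a temperature sensor) from an SNMP agent."""
    error_indication, error_status, _, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData(community, mpModel=1),      # SNMPv2c
        UdpTransportTarget((host, 161)),
        ContextData(),
        ObjectType(ObjectIdentity(oid)),
    ))
    if error_indication or error_status:
        raise RuntimeError(str(error_indication or error_status))
    return var_binds[0][1]

# Standard sysDescr OID as a smoke test; a real DCIM adapter would poll
# vendor-specific sensor OIDs (temperature, power draw, etc.) instead.
print(read_sensor("10.0.0.50", "1.3.6.1.2.1.1.1.0"))
```

A DCIM system runs thousands of such polls on a schedule and aggregates the results with the rack and floor-space inventory.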
Thus, combined by macro-virtualization technology, the provider's physical data centers are transformed into a single "ecosystem" which, among other things, has increased reliability. The proverb about the broom that cannot be broken whole, only twig by twig, is appropriate here in reverse: even if the engineering infrastructure of each individual data center is of modest reliability, together they provide full mutual redundancy.
Naturally, macro-virtualization alone is not enough for an SDDC. The concept of a software-defined data center embodies a steady tendency to reduce the physical complexity of the infrastructure and transfer that complexity into a virtual, software environment. One manifestation of this approach is Network Function Virtualization (NFV). Both Open Source and commercial implementations of network devices in a virtual environment already exist: switches, routers, load balancers, firewalls and so on. In the foreseeable future we can expect that only servers (which also act as distributed storage systems) and high-performance switches with a limited feature set will remain in the data center. Such an infrastructure will satisfy the overwhelming majority of the requirements of modern software, which is designed from the start for cloud deployment and virtualization.
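As a simple illustration of a network function living in software, the sketch below turns an ordinary Linux VM into a basic NAT router with a minimal firewall using standard sysctl and iptables tools. The tenant subnet and the uplink interface name are assumptions; commercial NFV appliances do essentially the same thing with more polish.

```python
import subprocess

def run(cmd):
    """Run a shell command and fail loudly if it returns an error."""
    subprocess.run(cmd, shell=True, check=True)

# Enable packet forwarding so the VM acts as a router
run("sysctl -w net.ipv4.ip_forward=1")

# NAT traffic from an assumed tenant subnet out of the assumed uplink eth0
run("iptables -t nat -A POSTROUTING -s 10.10.0.0/24 -o eth0 -j MASQUERADE")

# Minimal firewall: allow return traffic, drop new inbound connections
run("iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT")
run("iptables -A FORWARD -i eth0 -m state --state NEW -j DROP")
```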
Why is this trend steady, and what are the benefits of NFV? Here is how the CEOs of 220 companies from various sectors of the economy, surveyed by the SDNCentral portal, answer this question (see Fig. 1).
Fig. 1. The advantages of NFV (source: SDNCentral Report on Network Virtualization, 2014)
The survey data show that flexibility leads by almost a three-fold margin: a virtual server is much easier to procure, scale and so on than a physical device, while capital and operating cost savings, although noted by respondents, are not in the leading positions.
Looking at the survey results in more detail, the following benefits can be noted:
- the ability to create a personal network environment for each application or subscriber in the cloud infrastructure;
- a reduced range of device types;
- lower maintenance and operation costs for the IT infrastructure: virtual network devices "live" in a virtualized environment on standard, uniform servers, so supporting them does not require a large stock of spare parts and accessories or separate rack space;
- shorter delivery times: NFV is software and licenses, which are much simpler to deliver than equipment;
- simple replication and scaling, both of which are easy to automate;
- fast and easy recovery: backup copies of virtual network device configurations, or simply images of their virtual machines, can be restored within minutes on other servers (see the sketch after this list).
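As an illustration of the recovery point, here is a sketch that backs up the definitions of virtual network appliances through the libvirt API (a KVM host is assumed). The appliance names and backup path are hypothetical, and the disk images would be protected separately, for example on shared or backed-up storage.

```python
import libvirt  # pip install libvirt-python

# Hypothetical names of the leased virtual network appliances
APPLIANCES = ["vrouter01", "vfw01", "vlb01"]

conn = libvirt.open("qemu:///system")
for name in APPLIANCES:
    dom = conn.lookupByName(name)
    xml = dom.XMLDesc(0)                      # full appliance definition
    with open(f"/backup/{name}.xml", "w") as f:
        f.write(xml)
conn.close()

# On another host the appliance can be re-created from the saved definition
# with conn.defineXML(xml) and then started, while its disk image is
# restored from shared or backed-up storage.
```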
Thanks to NFV, in a matter of minutes the provider can give a subscriber the required number and variety of network devices, which the subscriber then configures independently to meet its own requirements. Incidentally, the ability to fully manage the settings of the leased network devices qualitatively distinguishes this service from the option where the provider configures the network itself: the latter gives the customer no control over the settings, greatly complicates network operation for the operator and, in the end, turns out far more expensive for everyone.
Wherever the physical and virtual networks meet, the problem of managing and configuring such a "hybrid" network immediately arises. Traditional network management methods, as a rule, fall short both in functionality and in the fleet of equipment they support, so the IT industry began to look for workarounds. Indeed, instead of reconfiguring the physical network for each individual case, why not leave it alone, set it up once and demand only one thing from it: reliability? The result is a new type of network, the overlay network, a logical network built on top of the physical one. From a networking point of view this is not entirely new: the well-known IEEE 802.1Q (VLAN) standard fits the same definition. The difference is that overlay protocols work across routed networks and provide far more tags for identifying networks (typically around 16 million). In general, an overlay network is built from software or hardware switches (gateways) and tunneling protocols (VXLAN, NVGRE, STT, GENEVE).
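Here is a minimal sketch of what creating such a tunnel looks like on a Linux host with standard iproute2 tools, driven from Python. The VNI, multicast group and underlay interface are assumptions; hardware gateways and virtualization platforms do the equivalent through their own interfaces.

```python
import subprocess

def run(cmd):
    print("+", cmd)
    subprocess.run(cmd.split(), check=True)

VNI = 5001   # assumed VXLAN Network Identifier (24 bits, ~16M possible values)

# Create a VXLAN endpoint over the assumed underlay interface eth0,
# using multicast for unknown/broadcast traffic and the standard UDP port
run(f"ip link add vxlan{VNI} type vxlan id {VNI} dev eth0 "
    f"group 239.1.1.1 dstport 4789")

# Bridge the tunnel endpoint with the virtual interfaces of the VMs
run(f"ip link add br{VNI} type bridge")
run(f"ip link set vxlan{VNI} master br{VNI}")
run(f"ip link set vxlan{VNI} up")
run(f"ip link set br{VNI} up")
```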
Thanks to overlay networks, the virtual environment administrator configures tunnels between virtual machines without touching any settings on the physical switches. An overlay network makes it possible to provide the necessary services for applications on top of any reliable network infrastructure. Its advantages are:
- ensuring the connectivity of virtual machines in different network segments and even in different data centers;
- improving the stability of the network due to the ability to use a routed network as a transport;
- the ability to operate within the virtual computing infrastructure and to interact with NFV devices;
- all overlay protocols operate on the principle of encapsulation and use the standard Ethernet frame format, which ensures full compatibility with the existing network infrastructure.
The disadvantages of the technology include its use of multicast (which requires multicast support in the physical network infrastructure) and increased server utilization caused by the need to encapsulate and decapsulate the overlay traffic.
Another building block of the software-defined network environment is Software-Defined Networking (SDN) technology. The high-performance network switches that integrate the servers into a single infrastructure are not covered by the NFV approach, and it is here that SDN keeps finding new supporters.
Fig. 2. Software-defined network architecture
The main idea of SDN is the separation of the control and forwarding functions of the network infrastructure. All the "intelligence" is concentrated on a separate hardware/software platform, a dedicated SDN controller, which determines the operation of the network according to specified rules. The switches, in turn, perform basic operations on packets and shed most of their intelligent functions. Traffic management, that is, the interaction between the controller and the switches, is carried out over special protocols (the most promising and actively developed of which is OpenFlow) that operate with the notion of a "flow." Through them, various actions are applied to traffic: blocking, permitting, redirection and so on. SDN makes network management flexible and greatly simplifies administration.
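To make the controller/switch split concrete, here is a minimal sketch of an OpenFlow application for the Open Source Ryu controller framework: it installs a lowest-priority "table-miss" flow so that packets the switch cannot match are sent up to the controller, which then decides what to do with them.

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class TableMissApp(app_manager.RyuApp):
    """Minimal OpenFlow 1.3 app: forward unmatched packets to the controller."""
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        # Match everything, lowest priority: the classic table-miss entry
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER,
                                          ofp.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                      match=match, instructions=inst))
```

The app is launched with ryu-manager and an OpenFlow 1.3 switch (Open vSwitch, for example) is pointed at the controller; rules that permit, block or redirect specific flows are installed the same way, only with non-empty match fields and higher priorities.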
But the main appeal of SDN lies elsewhere: the SDN controller should provide means of integration with orchestration systems and, in the future, with applications themselves. This makes it possible to manage network resources based on the actual requests of information systems. For example, the network can dynamically allocate wider bandwidth for the duration of a video-conferencing session and then redistribute it in favor of other applications. The awareness that the network carries not just packets and flows but applications is one of the key features of SDN, and it is what secures its future in the corporate world. It is the centralized SDN architecture that makes this possible.
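What such integration might look like from the orchestrator's side is sketched below. The REST endpoint and JSON fields are purely hypothetical: every controller (OpenDaylight, ONOS, commercial products) exposes its own northbound API for this kind of request.

```python
import requests

# Hypothetical northbound endpoint of the SDN controller
CONTROLLER = "https://sdn-controller.example.com:8443"

def reserve_bandwidth(src, dst, mbps, duration_s):
    """Ask the controller to guarantee bandwidth for a video-conference flow."""
    policy = {
        "src": src,
        "dst": dst,
        "min_bandwidth_mbps": mbps,
        "expires_in_s": duration_s,   # released automatically after the call
    }
    resp = requests.post(f"{CONTROLLER}/policies/bandwidth",
                         json=policy, timeout=10)
    resp.raise_for_status()
    return resp.json()["policy_id"]

# e.g. reserve_bandwidth("10.10.0.21", "10.20.0.34", mbps=20, duration_s=3600)
```

When the session ends, the orchestrator deletes the policy and the capacity is redistributed among the remaining applications.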
Speed bumps
SDN, NFV, DCIM and related technologies are of interest to many Russian IT service providers, yet there are still no full-fledged SDDC implementations in our country. There are several fundamental reasons for this.
To begin with, there are no ready-made solutions on the market for integrating DCIM systems with virtual machine management software. Companies have to resolve this themselves, relying on their own IT specialists or the team of a partner system integrator. Choosing a DCIM product also presents certain difficulties. This software is currently offered by two groups of manufacturers. The first includes vendors that have historically specialized in solutions for data center engineering infrastructure: when building a system for managing a data center's physical assets, they go "from the bottom up," starting with detailed data collection on the state of the engineering components.
The second group consists of manufacturers of integrated IT infrastructure management solutions. Such systems have broad functionality and are designed for detailed inventory, planning of equipment placement in the data center, forecasting, operational monitoring of power consumption and so on; that is, they approach the problem "from the top down." The choice between them depends on the specific conditions: the company must decide which system it actually needs, taking into account the state of its infrastructure and its development plans, run an RFI, define KPIs, draw up a short list and carry out pilot testing. All of this translates into significant time and labor costs, so it is only logical that many providers put it on the back burner, postponing the transition to a software-defined environment.
As for NFV, SDN and overlay networks, their spread is hindered by their novelty and the lack of full-fledged, ready-to-implement solutions. Company data centers are populated with long-familiar network hardware whose pros and cons IT professionals know well, unlike the behavior of virtualized network devices. A paradigm shift requires additional financial investment, yet companies have already spent money building a traditional network. Counting on Open Source SDN controllers does not help much either: Open Source SDN software is for the most part a "blank" that has to be finished, primarily by programmers, which not every company can afford. Market expectations of "cheap" SDN switches have not yet been met: the TCAM (Ternary Content Addressable Memory) needed to support the required number of flows is an expensive component and drives up the price. In addition, vendors, for understandable reasons, make their products far from "open": no serious manufacturer will miss the opportunity to tie a client company down with proprietary extensions.
On the other hand, hardware will sooner or later need replacement due to physical wear and obsolescence, so when planning the further development of a company's IT infrastructure it is worth taking into account the prospect of expanding its virtual layer. This is a process one should prepare for in advance, by testing the various components and implementation options.