
How data centers work: today and tomorrow

The future of data centers lies in cloud solutions, hyper-converged infrastructure and more powerful hardware.



A data center is the facility an enterprise uses to house the applications and information that are critical to its operations. As data centers improve and evolve, it is important to plan carefully for long-term reliability and security.

Equipment in the data center


Many people picture a data center as one huge, monolithic machine; in fact, a data center is a mix of technical elements such as routers, switches, security appliances, storage systems, servers, application delivery controllers, and more. Together, these components store and manage the business-critical systems that keep the company running continuously. Reliability, performance, security, and continuous improvement of the data center are therefore top priorities.

Data Center Infrastructure


In addition to the IT equipment itself, a data center needs significant facility infrastructure to keep the hardware and software running continuously. This includes power subsystems, uninterruptible power supplies (UPS), ventilation and cooling systems, backup generators, and cabling to external network operators.

Data Center Architecture


Almost every large company operates several geographically distributed data centers, i.e. in multiple regions. This gives the organization more options for supporting and backing up information, and provides protection against natural and man-made disasters such as floods, storms, and terrorist threats. Designing a data center architecture always involves trade-offs, because the possibilities are nearly limitless. The key questions include: How much downtime can the business tolerate? How quickly must data be recovered after a failure? How much geographic diversity is required?


The answers to these questions will help you make an intelligent decision about how many data centers to build and where. For example, a financial services firm in Manhattan likely requires continuous operations, since any interruption could cost it millions of dollars. The company would probably build two data centers in close proximity, for example in nearby New Jersey and Connecticut, that mirror each other. Either one could then be shut down entirely without interrupting the business, because all work would continue from the other.

A small professional services firm, by contrast, may not need instant access to information, so its best option may be to host the primary data center in its own office and back up data from all of its branches to an alternate site overnight. In the event of a failure, it would start restoring the information, but without the urgency of a business that needs real-time data to compete successfully in the market.

Although data centers are often associated only with huge companies and cloud service providers, in fact any company can have its own data center. For a small business, the data center may be a single small room located in its office space.

Data Center Standards


To make it easier for IT managers to make decisions about data center infrastructure, the American National Standards Institute (ANSI) and the Telecommunications Industry Association (TIA) published a data center standard in 2005 that defines four tiers of reliability. A complete description of the requirements for each tier is available from the standards bodies.
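As a rough illustration, the four tiers are commonly associated with availability targets ranging from about 99.671% for Tier 1 to 99.995% for Tier 4; these are widely cited figures, not the normative text of the standard. A short sketch converts an availability percentage into the annual downtime it allows:

```python
# Convert an availability target into the maximum annual downtime it allows.
# The tier percentages below are the figures commonly cited alongside the
# four-tier model; treat them as illustrative, not as the standard's text.
MINUTES_PER_YEAR = 365 * 24 * 60

tiers = {
    "Tier 1": 99.671,
    "Tier 2": 99.741,
    "Tier 3": 99.982,
    "Tier 4": 99.995,
}

for tier, availability in tiers.items():
    downtime_min = MINUTES_PER_YEAR * (1 - availability / 100)
    print(f"{tier}: {availability}% -> {downtime_min / 60:.1f} hours of downtime per year")
```

Run as-is, this prints roughly 28.8 hours of allowed downtime per year for Tier 1 down to about 0.4 hours for Tier 4, which is why a business like the Manhattan financial firm above would insist on the highest tier.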

Technologies of the future


Like the rest of the IT industry, data centers are in a period of significant technical progress, and the data centers of the future will look very little like those of today.

Small and medium-sized businesses are gaining momentum, becoming more dynamic and decentralized, so the technologies used in data centers must become more flexible and scalable. Server virtualization is also increasingly popular, which puts new load on the network and changes the nature of the traffic: traditionally, data center traffic was north-south (exchange between users and the data center), but a growing share of data now moves within the data center itself, i.e. east-west traffic is increasing.

Below are the key technologies that are transforming traditional, rigid, static data centers into flexible, agile ones capable of meeting any demand of a digital organization.

Public clouds


Traditionally, enterprises built their own data centers or used a hosting or managed services provider. Those approaches changed the ownership structure and the economics of the data center, but technical solutions still took a long time to implement. The growing popularity of cloud services and the Infrastructure as a Service (IaaS) model offered by providers such as Amazon Web Services and Microsoft Azure now lets a business provision a virtual data center in the cloud with a couple of mouse clicks. Data from the analyst firm ZK Research shows that more than 80% of companies plan a hybrid approach, combining public clouds with private data centers.
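As a minimal sketch of the IaaS idea, the snippet below launches a single virtual server on AWS using the boto3 SDK. The AMI ID, instance type, and region are placeholder assumptions, and real use requires configured AWS credentials:

```python
# Minimal IaaS sketch: launch one virtual server on AWS EC2 with boto3.
# Assumptions: boto3 is installed, AWS credentials are configured, and
# the AMI ID below is a placeholder that must be replaced with a real image.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t3.micro",          # small general-purpose instance
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched instance {instance_id}")
```

A few lines of code replace what used to be weeks of procurement and racking; that speed is the core of the "virtual data center in a couple of clicks" claim.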

Software-Defined Networking (SDN)


The agility of a digital business depends directly on its least flexible component: the network. SDN separates the network's control plane from its data plane and manages forwarding in software, which can take network dynamics to a previously unattainable level.
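To make the control/data plane split concrete, here is a toy sketch (not any real SDN controller's API): a "controller" installs match-action rules into a software flow table, and the "switch" simply looks packets up in it.

```python
# Toy illustration of the SDN idea: a controller programs match-action
# rules into a flow table; the data plane just looks packets up in it.
# This models the concept only; it is not a real OpenFlow or controller API.

flow_table = []  # ordered list of (match_fields, action) rules

def controller_install_rule(match, action):
    """Control plane: push a forwarding decision into the flow table."""
    flow_table.append((match, action))

def switch_forward(packet):
    """Data plane: apply the first rule whose fields all match the packet."""
    for match, action in flow_table:
        if all(packet.get(k) == v for k, v in match.items()):
            return action
    return "send_to_controller"  # table miss: ask the control plane

# The controller reroutes traffic by changing rules, not by rewiring hardware.
controller_install_rule({"dst_ip": "10.0.0.5"}, "forward_port_2")
controller_install_rule({"dst_ip": "10.0.0.7"}, "drop")

print(switch_forward({"dst_ip": "10.0.0.5"}))  # forward_port_2
print(switch_forward({"dst_ip": "10.0.0.9"}))  # send_to_controller
```

Because forwarding behavior lives in software rules rather than device-by-device configuration, the network can be reshaped as fast as the business needs.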

Hyperconverged Infrastructure (HCI)


One of the priorities of any data center is assembling the right mix of servers, storage, and networking to support demanding applications. Once the infrastructure is deployed, it must also be possible to scale it quickly without disrupting the applications. HCI greatly simplifies both tasks by providing an easy-to-deploy appliance, built on commodity hardware, that scales out by adding more nodes. HCI was initially used mainly for virtual desktop infrastructure, but it has recently expanded to other business applications such as unified communications and databases.
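The scale-out model can be sketched as pooling each node's compute and storage into one cluster-wide resource; the node sizes below are made-up assumptions, not vendor specifications:

```python
# Toy sketch of HCI scale-out: each commodity node contributes compute and
# storage to a single shared pool, so capacity grows by adding nodes.
# Node sizes are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Node:
    cpu_cores: int
    storage_tb: float

class HciCluster:
    def __init__(self):
        self.nodes = []

    def add_node(self, node: Node):
        """Scaling out is just joining another node to the cluster."""
        self.nodes.append(node)

    @property
    def total_cores(self):
        return sum(n.cpu_cores for n in self.nodes)

    @property
    def total_storage_tb(self):
        return sum(n.storage_tb for n in self.nodes)

cluster = HciCluster()
for _ in range(3):
    cluster.add_node(Node(cpu_cores=32, storage_tb=20.0))

print(f"{cluster.total_cores} cores, {cluster.total_storage_tb} TB pooled")
```

Growing the cluster never requires redesigning it, which is what lets HCI scale without disrupting running applications.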

Containers


Application development is often held up while the infrastructure the application will run on is being built, which can seriously slow an organization's move to a DevOps model. Containers virtualize an entire runtime environment, allowing developers to run an application and all of its dependencies in a self-contained unit. Containers are very lightweight and can be created and destroyed quickly, which makes them ideal for testing applications under specific conditions.
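As a small illustration (a sketch assuming a local Docker daemon and the docker Python SDK are available), spinning up a disposable container for a quick test takes only a few lines:

```python
# Sketch: start a throwaway container, run a command in it, and discard it.
# Assumes a running local Docker daemon and the "docker" Python SDK.
import docker

client = docker.from_env()

# Run a short-lived container; remove=True deletes it when the command exits.
output = client.containers.run(
    "python:3.11-slim",   # public base image with Python preinstalled
    ["python", "-c", "print('hello from an isolated environment')"],
    remove=True,
)
print(output.decode())
```

The container starts in seconds, runs with its own isolated dependencies, and leaves nothing behind, which is exactly the create-and-destroy cycle that makes containers attractive for testing.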

Microsegmentation


Traditional data centers concentrate their security technologies at the perimeter, so north-south traffic passes through all of the protection tools on its way in and out, keeping operations safe. The growth of east-west traffic means that traffic bypasses firewalls, intrusion prevention systems, and other security controls, which allows malware to spread quickly. Microsegmentation is a method of creating secure zones within a data center in which resources are isolated from one another, so that if a breach or leak occurs somewhere, the damage is contained. Microsegmentation is usually implemented in software, which makes it very agile.
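A minimal sketch of the idea (with hypothetical segment names, not a real policy engine): traffic between workload segments is denied unless an explicit allow rule exists.

```python
# Toy microsegmentation policy: default-deny between segments, with an
# explicit allowlist. Segment names and rules are illustrative assumptions.
ALLOWED_FLOWS = {
    ("web", "app"): {443},   # web tier may call the app tier over HTTPS
    ("app", "db"): {5432},   # app tier may reach the database
}

def is_allowed(src_segment, dst_segment, port):
    """Permit a flow only if an explicit rule allows it; deny everything else."""
    return port in ALLOWED_FLOWS.get((src_segment, dst_segment), set())

print(is_allowed("web", "app", 443))   # True: explicitly allowed
print(is_allowed("web", "db", 5432))   # False: no rule, lateral movement blocked
```

Default-deny between segments is the key design choice: a compromised web server cannot reach the database directly, so east-west movement of malware is cut off.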

NVMe


In a world where information is fully digitized, data must move at very high speeds. Traditional storage protocols such as the Small Computer System Interface (SCSI) and Advanced Technology Attachment (ATA) have been around for decades and are reaching their limits. NVMe is a storage protocol designed to accelerate the transfer of information between systems and solid-state drives, significantly increasing data transfer speeds.
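One reason for the speedup is command queuing. A quick comparison of how many commands each interface can keep in flight, using the limits commonly cited from the AHCI/SATA and NVMe specifications:

```python
# Compare in-flight command capacity: legacy SATA/AHCI vs. NVMe.
# Figures are the commonly cited specification limits.
ahci_queues, ahci_depth = 1, 32            # AHCI: one queue, 32 commands
nvme_queues, nvme_depth = 65_535, 65_536   # NVMe: up to 64K queues x 64K commands

ahci_commands = ahci_queues * ahci_depth
nvme_commands = nvme_queues * nvme_depth

print(f"AHCI: {ahci_commands} commands in flight")
print(f"NVMe: {nvme_commands:,} commands in flight "
      f"({nvme_commands // ahci_commands:,}x more parallelism)")
```

Massively deeper queuing lets NVMe keep a solid-state drive's internal parallelism saturated, which the older single-queue protocols simply cannot do.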

Graphics Processing Units (GPUs)


Central processing units (CPUs) supplied the data center's computing infrastructure for decades, until Moore's Law began running up against its limits. Meanwhile, modern workloads such as analytics, machine learning, and IoT demand a new computational model whose requirements exceed the capabilities of conventional processors. Graphics processing units, previously used only for games, operate fundamentally differently: they process many threads in parallel, which makes them ideal for the data center of the future.
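As a rough illustration of the data-parallel style that GPUs favor (shown here with NumPy on the CPU as a stand-in; real GPU execution would use something like CUDA), a single vectorized operation replaces an element-by-element loop:

```python
# Illustration of the data-parallel style that suits GPUs: one operation
# applied across a large array at once instead of element by element.
# NumPy on the CPU stands in here; on a GPU each element could map to a thread.
import numpy as np

x = np.random.rand(1_000_000)

# Serial, CPU-style mindset: handle one element at a time.
serial = [xi * 2.0 + 1.0 for xi in x]

# Data-parallel mindset: express the whole computation as one array operation.
parallel = x * 2.0 + 1.0

print(np.allclose(serial, parallel))  # True: same result, different execution model
```

Workloads like machine learning consist largely of exactly this kind of uniform operation over huge arrays, which is why they map so naturally onto thousands of GPU threads.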

Data centers have always been, and will remain, important to any business, regardless of its size. However, the ways data centers are deployed and the technologies used in them have undergone tremendous change. When planning your ideal data center, remember that the world is becoming more dynamic: the technologies driving today's major changes in IT are the technologies of the future.

Original article: How a data center works, today and tomorrow

Source: https://habr.com/ru/post/343282/

