
Changes in data centers: Technological solutions



/ photo Arthur Caranta CC

At 1cloud, an IaaS provider, we write a lot about the development of cloud technologies and about our approach to designing and building solutions for delivering virtual infrastructure.
We are interested not only in new products but also in technologies that have left their mark on history. For example, we recently wrote about the remarkable ENIAC computer, whose appearance marked the beginning of a new era.

A lot of time has passed since then, and far more advanced and powerful computing systems have appeared. That is what we will talk about today: the technological trends now shaping data centers.

One of the main drivers behind data center improvements is cost. Companies and startups are trying to build more efficient systems, optimizing both the space they occupy and the cost of maintaining them. To cut cooling costs, for example, some organizations build data centers in cold regions or underground, and Microsoft has even sunk a data center to the bottom of the sea. Such facilities certainly address the problem of where to place equipment.

Pursuing the same goal of shrinking the footprint of racks and the associated capital and operating costs, Vapor IO entered the game in March 2015 by announcing a new approach to data center design: the Vapor Chamber modular data center.



/ photo vapor.io - Vapor Chamber

According to Vapor IO CEO Cole Crawford, the company managed to wrap a data center in a convenient enclosure that is easy to ship and transport. One such cylinder, three meters in diameter, accommodates six 42U racks with a total IT load of up to 150 kW, and is therefore well suited for deployment in tight urban environments.

The Vapor IO website states that 36 Vapor Chamber units fit into the same space required to house 120 standard racks in a hot/cold aisle configuration, while the Vapor Chamber also significantly lowers power usage effectiveness (PUE).
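As a reminder, PUE is simply the ratio of total facility power to the power drawn by the IT equipment itself, so values closer to 1.0 mean less energy lost to cooling and power delivery. Below is a minimal sketch of the calculation; the figures are illustrative assumptions, not Vapor IO's numbers.

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

# Hypothetical example: a conventional hot/cold aisle room vs. a denser design.
conventional = pue(total_facility_kw=1800, it_equipment_kw=1000)  # -> 1.8
dense_design = pue(total_facility_kw=1200, it_equipment_kw=1000)  # -> 1.2

print(f"conventional PUE: {conventional:.2f}, denser design PUE: {dense_design:.2f}")
```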

Among other things, Vapor IO developed its own software, Vapor CORE (Core Operating Runtime Environment). It lets data center operators gauge the performance of their IT equipment in terms of various metrics, whether processed URLs or transactions.

The approach is similar to the one eBay used for its Digital Service Efficiency concept, introduced in March 2013, although Vapor IO's product is far more versatile.
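To make the idea concrete, a metric of this kind relates useful work to the energy spent producing it, in the spirit of eBay's transactions-per-kilowatt-hour figure. The sketch below is only an illustration; the helper name and numbers are hypothetical.

```python
def transactions_per_kwh(transactions: int, avg_power_kw: float, hours: float) -> float:
    """Relate useful work (transactions served) to the energy consumed over a period."""
    energy_kwh = avg_power_kw * hours
    return transactions / energy_kwh

# Hypothetical day of traffic for one rack drawing 10 kW on average.
print(transactions_per_kwh(transactions=50_000_000, avg_power_kw=10, hours=24))
# -> roughly 208,333 transactions per kWh
```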

The Vapor Chamber is complemented by OpenDCRE (Open Data Center Runtime Environment), Vapor IO's open API, which lets applications integrate with any running data center. Real-time information allows operators to correctly assess resource needs and allocate capacity.
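OpenDCRE exposes sensor, power, and boot-control data over a simple HTTP interface. The sketch below shows what polling such an endpoint from Python could look like; the host, port, route layout, and response field are assumptions for illustration, so consult the OpenDCRE documentation for the actual paths.

```python
import requests

# Assumed base URL and route layout for an OpenDCRE-style endpoint; the real
# paths and response fields depend on the OpenDCRE version actually deployed.
BASE_URL = "http://opendcre.example.local:5000/opendcre/1.2"

def read_temperature(rack_id: str, board_id: str, device_id: str) -> float:
    """Poll one temperature sensor and return its reading in degrees Celsius."""
    url = f"{BASE_URL}/read/temperature/{rack_id}/{board_id}/{device_id}"
    response = requests.get(url, timeout=5)
    response.raise_for_status()
    # Field name is illustrative; real payloads may differ.
    return float(response.json()["temperature_c"])

if __name__ == "__main__":
    print(read_temperature("rack_1", "00000001", "0001"))
```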

Open Solutions - the Future of Data Centers

It seems that more and more global companies are embracing open technologies. In 2015, Microsoft adopted Linux, Apple open-sourced Swift, its newest programming language, and cloud services simply could not function without Linux. Facebook raised this big wave when it launched the Open Compute Project.

Facebook is developing open hardware and immediately rolling new technologies into its own data centers. The company relies on the latest building blocks - SSDs, GPUs, NVM and JBOF - as part of its new vision for a network of powerful data centers.

“Over the next 10 years we will be focusing closely on artificial intelligence and virtual reality technologies,” explains Mark Zuckerberg. “All of this will require far more computing power than we have today.”



/ photo gothopotam CC

Facebook has completely reworked its infrastructure. The usual dual-socket server gave way to a lower-power system-on-chip (SoC) design based on Intel Xeon-D.

“We are working closely with Intel on the development of the new processor. At the same time, the server infrastructure is being reworked so that the system meets our needs and scales well,” write the company's representatives.

This single-socket server, with its lower-power CPU, handles web workloads better than the dual-socket version. At the same time, the server infrastructure was rebuilt in a way that doubled the number of processors.

“The performance figures of the new processor fully meet our expectations,” Facebook engineers say. “Moreover, a single-socket server has less demanding cooling requirements.”

All this allowed Facebook to build a server infrastructure that packs far more compute capacity into each rack while staying within 11 kW per rack.
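A back-of-the-envelope sketch shows why the per-rack power budget drives the single-socket design. The wattages below are illustrative assumptions, not Facebook's published figures.

```python
RACK_BUDGET_W = 11_000  # the 11 kW per-rack budget mentioned above

# Assumed, illustrative power draw per server; not Facebook's published numbers.
DUAL_SOCKET_SERVER_W = 350   # two full-size CPUs
SINGLE_SOCKET_NODE_W = 90    # one low-power SoC such as a Xeon-D

dual_socket_cpus = (RACK_BUDGET_W // DUAL_SOCKET_SERVER_W) * 2   # 31 servers -> 62 CPUs
single_socket_cpus = RACK_BUDGET_W // SINGLE_SOCKET_NODE_W       # 122 nodes -> 122 CPUs

print(f"CPUs per rack, dual-socket design:   {dual_socket_cpus}")
print(f"CPUs per rack, single-socket design: {single_socket_cpus}")
```

Under these assumed figures, the same 11 kW envelope holds roughly twice as many processors when each node carries a single low-power SoC.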

Facebook also shared its new approach to GPUs, which have attracted growing attention in recent years. Originally GPUs were used to accelerate graphics on desktop PCs; today they are actively integrated into supercomputers to tackle far more complex problems.

The company uses GPU horsepower in its artificial intelligence and machine learning systems. A dedicated in-house lab develops neural networks for specific tasks, which obviously demands an entirely different level of performance.

The Big Sur system, for example, uses the Nvidia Tesla accelerated computing platform with eight high-performance 300 W graphics processors, which comes to 2.4 kW for the GPUs alone. Facebook engineers optimized the power and heat output of the new servers, allowing them to run in the company's existing data centers alongside classic servers. As a result, neural network training times have been cut significantly.

Another area where Facebook has focused its efforts is storage. The company has been using flash for years to speed up boot drives and caching. Engineers replaced hard drives with solid-state drives, transforming storage from JBOD (Just a Bunch of Disks) into JBOF (Just a Bunch of Flash).

Facebook's new JBOF module, named Lightning, was developed jointly with Intel. By using the NVM Express protocol and the PCI Express interface, both optimized for SSDs, engineers achieved high throughput. Even so, the company says this is still not enough.

The company is looking for the answer in 3D XPoint, a technology developed by Intel and Micron. It is based on phase-change cells paired with an ovonic (Ovshinsky) memory switch, which yields memory with greater capacity than DRAM and performance beyond what flash can offer.

The technology could give rise to a new category of non-volatile memory: fast enough to sit on the DRAM bus and capacious enough to store large amounts of data.

Data center architects are already experimenting with these emerging storage technologies. There have been attempts, for example, to build a cluster of flash memory chips managed by a chip that emulates a disk controller.

In the future, the structure of storage will change dramatically: the place of DRAM will be taken by capacious arrays of high-performance flash, which will allow applications to “bypass” the operating system and the hypervisor and work with server DIMMs in direct-access mode. This will dramatically reduce latency, and storage devices can sit right next to the servers.
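As a rough illustration of what direct-access mode means in practice, the sketch below memory-maps a file and reads and writes it with ordinary in-memory operations. On a filesystem mounted with DAX on non-volatile DIMMs, those accesses would bypass the page cache; the path below is hypothetical, and on ordinary storage the same code simply runs through the cache.

```python
import mmap
import os

# Hypothetical path; on a DAX-mounted filesystem backed by NVDIMMs these
# mapped reads and writes go straight to the persistent media, bypassing
# the page cache. On ordinary storage the same code works via the cache.
PATH = "/mnt/pmem0/example.dat"
SIZE = 4096

fd = os.open(PATH, os.O_CREAT | os.O_RDWR)
try:
    os.ftruncate(fd, SIZE)
    with mmap.mmap(fd, SIZE) as buf:
        buf[0:5] = b"hello"      # store directly into the mapping
        print(bytes(buf[0:5]))   # load it back: b'hello'
        buf.flush()              # make sure the data reaches the media
finally:
    os.close(fd)
```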

Open Compute is changing the data center and cloud technology market. The joint efforts of major IT players are standardizing server designs and driving down the cost of developing them. The main goal of such open projects is to create the most efficient and scalable server systems possible, with low maintenance overhead and power consumption. So far, it is safe to say that things are moving in exactly that direction.

Source: https://habr.com/ru/post/305034/

