
Back to the future of data centers



What does the future hold for us? What world will we live in twenty or thirty years from now? The future is both exciting and uncertain. Just as in antiquity, the stream of people bringing their questions to oracles, magicians, and visionaries never dries up. Often people are interested not so much in tomorrow as in a far more distant future, immense on the scale of a human lifetime. It might seem pointless, but such is human nature. Beginning in the nineteenth century, a new kind of visionary took the place of the exalted elders: the science fiction writer.

Rewatching the other day the second part of the wonderful "Back to the Future" trilogy, which hit the big screen in 1989, I noted with some bitterness that in the year 2015 the streets of our cities would not be flooded with flying cars, that household reactors turning garbage into energy remain a utopia, that holographic cinemas are an unattainable myth, and that even a dust-proof magazine cover has not been invented over the past quarter century. At the same time, in some areas of everyday life the film's foresight turned out to be fairly accurate.


These reflections prompted me to take a closer look at the innovations that have affected data centers. Yes, for many years now Intel has kept updating its processor lines, and software does not stand still either, but does this give us the right to speak of any serious changes in how data centers operate? Let us take a closer look at the innovations we have heard so much about over the last ten years. At what stage of adoption are they now?

Low energy efficiency (PUE)




When the Green Grid organization introduced its energy efficiency metric (PUE) in 2007, the Lawrence Berkeley National Laboratory published a report on a study of 20 data centers, which showed that the average PUE value was 2.2. Obviously, companies are not always interested in driving PUE down as far as possible: energy efficiency requires large financial investments and involves technical difficulties that create additional problems when building new data centers. But IT giants such as Google and Facebook set a good example in this direction. Their data centers are characterized by a PUE of around 1.2-1.3, which puts these companies among the leaders in energy efficiency. At the same time, material published in 2013, based on analyses by global research organizations, showed that at the time of the study the average PUE of existing data centers worldwide was slightly above 2.0. Improving the energy efficiency of data centers remains a primary goal that must be addressed to reduce the cost of operating IT infrastructure.
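For reference, PUE is simply the ratio of total facility energy to the energy consumed by the IT equipment itself. A minimal sketch of the arithmetic (the load figures below are illustrative assumptions, not data from the article):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT equipment power."""
    return total_facility_kw / it_equipment_kw

# Illustrative figures only: a 1 MW IT load in an average facility (PUE ~2.2)
# versus a highly optimized one (PUE ~1.2).
it_load_kw = 1000.0
print(pue(2200.0, it_load_kw))  # 2.2 -> 1.2 kW of overhead for every kW of IT load
print(pue(1200.0, it_load_kw))  # 1.2 -> only 0.2 kW of overhead per kW of IT load
```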

The fight against idle servers




Long ago, information systems engineers settled on the idea that all network equipment should run at full capacity 24 hours a day. Today, because server platforms have become so widely available, we end up with many isolated clusters that run permanently and exist solely to support specific applications or particular network functions, which in turn means a huge number of servers that spend most of their time doing nothing useful. A report published by the research company McKinsey showed that in the data centers where the study was conducted, the share of pointlessly idle servers reached 30%. Although much has been done over the past ten years in the area of virtualization, which has helped reduce server idle time, the problem is still quite acute. Large Internet companies already use virtualization quite effectively, and it is one of the factors that makes their business more profitable. According to a 2014 assessment by the research firm Gartner, up to 70% of workloads running on x86 platforms are virtualized. Until data center owners bring their idle capacity down to a reasonable minimum, servers will keep devouring precious electricity.
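To make the scale tangible, here is a rough, purely illustrative estimate of the electricity wasted by idle machines; the fleet size and idle power draw are assumptions for the sake of the example, not figures from the report:

```python
# Rough illustration: energy burned by servers that sit idle around the clock.
# All numbers below are assumptions chosen for the example.
fleet_size = 10_000            # servers in a hypothetical data center
idle_share = 0.30              # ~30% of servers doing no useful work (the McKinsey figure)
idle_power_w = 150             # assumed idle draw of a commodity server, in watts
hours_per_year = 24 * 365

idle_servers = fleet_size * idle_share
wasted_kwh = idle_servers * idle_power_w * hours_per_year / 1000
print(f"{wasted_kwh:,.0f} kWh per year burned by idle machines")  # ~3,942,000 kWh
```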

Water cooling




This kind of cooling, of course, was not invented yesterday. As early as the 1960s, liquid cooling schemes were used to keep massive, power-hungry computers running smoothly. For modern supercomputers liquid cooling is not new either, but its use in data centers remains rare: these are mostly experimental installations, and deployments of such systems are isolated cases. Meanwhile, theory tells us that liquids can remove excess heat from the equipment in server cabinets thousands of times more efficiently than air. Unfortunately, data center owners, large and small alike, are still afraid to adopt this technology widely, because of the need to modify existing infrastructure, to change how staff work with equipment immersed in a liquid medium, and because of the lack of examples of large data centers running such a system. A successful example of liquid cooling is the US National Renewable Energy Laboratory, whose data center, built in 2013 to serve the Peregrine supercomputer housed there, shows a phenomenal PUE of 1.06! Moreover, during the cold months the installed system can direct the heat given off by the equipment toward heating the laboratory's campus. Positive examples like this should become the impetus for widespread use of liquid cooling technology.
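The "thousands of times" claim follows from the volumetric heat capacities of the two media; a back-of-the-envelope comparison, using standard textbook property values at roughly room temperature:

```python
# Back-of-the-envelope check on why liquid cooling beats air.
water_density = 998.0        # kg/m^3
water_cp = 4186.0            # J/(kg*K), specific heat of water
air_density = 1.2            # kg/m^3
air_cp = 1005.0              # J/(kg*K), specific heat of air

water_volumetric = water_density * water_cp   # ~4.18e6 J/(m^3*K)
air_volumetric = air_density * air_cp         # ~1.2e3 J/(m^3*K)

print(f"Per unit volume, water carries ~{water_volumetric / air_volumetric:,.0f}x more heat than air")
# ~3,500x: the same flow volume of water removes thousands of times more heat.
```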

Distributing peak loads




Peak loads are a true scourge of network infrastructure and a nightmare for engineers. Our daily rhythm of life dictates uneven power consumption in data centers. So that we have no trouble loading the pages of our favorite social networks between roughly 19:00 and 21:00 local time, companies that provide network services must keep an excess of computing capacity that sits unclaimed the rest of the time. Network engineers do, of course, find ways to deal with this: caching, and intelligent systems for redistributing load between server clusters. But imagine for a minute what would happen if we could eliminate network latency, or reduce it to an absolute minimum. It would radically change the entire telecommunications infrastructure on Earth. For now, the facts are these: an electromagnetic wave needs about 64 milliseconds to cover a distance equal to half the Earth's equator. Given the irregularity of routes, the request-response latency introduced by the network infrastructure itself, and a host of other factors, it is currently quite problematic to shift computing resources efficiently between different parts of the world, and companies risk ending up with a crowd of angry customers dissatisfied with the service offered to them.
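The figure can be sanity-checked with simple arithmetic; the sketch below assumes propagation at the speed of light in vacuum and, for comparison, at roughly two thirds of that speed in optical fiber:

```python
# Propagation delay across half the Earth's equator: a sanity check of the ~64 ms figure.
EQUATOR_KM = 40_075.0
C_VACUUM_KM_S = 299_792.0               # speed of light in vacuum
C_FIBER_KM_S = C_VACUUM_KM_S * 2 / 3    # rough speed of light in optical fiber

distance_km = EQUATOR_KM / 2
print(f"Vacuum: {distance_km / C_VACUUM_KM_S * 1000:.0f} ms one way")  # ~67 ms
print(f"Fiber:  {distance_km / C_FIBER_KM_S * 1000:.0f} ms one way")   # ~100 ms, before routing overhead
```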

Data Center Infrastructure Management (DCIM)


Another great undertaking that is now in a "shaky" state. The DCIM philosophy is built on the principle of creating uniform standards for managing and operating data centers. Standardizing equipment, software, and management approaches could make a huge contribution to lowering the cost of both building and maintaining network infrastructure. The emergence of unified metrics such as PUE, CUE, and DCe can be counted as movement in this direction. The recent joint statement by major IT giants about paying more attention to open source software and carrying out intercorporate projects on its basis is another step toward this goal. But because of differing interests, both state and commercial, the process of unification is moving very slowly. In the end, we all pay the costs of fragmentation.



Modular data centers


Here, obvious progress is worth noting. The technology, first patented by Google in 2003, of containers packed with server equipment that can be delivered to any point on Earth within 24 hours and deployed there as an essentially full-fledged data node is genuinely delightful. Of course, one should not forget about the necessary telecommunication channels, the site where these containers will be installed, and the staff, who play a very important role. But it is still an obvious breakthrough. By now the container approach to building new data centers has won over the main players in the IT market. Mobility, speed of deployment, unification, and high build quality achieved by skilled workers at the parent factories: all these qualities have been appreciated by civilian organizations and the military alike. Existing examples of the technology in use number in the hundreds.

Data centers north of the 48th parallel


Building data centers in northern climates, like some other ideas that stir the imagination, has not found wide support to date. According to some experts, it is most efficient to build data centers north of the 48th parallel, where conditions are most favorable for cheap, free-air cooling of server rooms. If we look at the existing geography of data centers, only Europe stands out at all in this regard: the data centers in Sweden and Finland immediately come to mind, the Google data center in Luleå alone being worth a mention. But these are rather isolated cases, and global providers prefer to place their infrastructure close to the direct consumers of their capacity, choosing the lowest possible network latency and unwilling to lose potential customers rather than saving on operating costs.

The general line: take no risks


Looking at the main trends in existing data centers and in the design of new ones, a conclusion suggests itself. These seemingly young and rapidly developing IT companies, revolutionaries of a kind, follow a rather conservative, not to say outdated, model of market behavior, which hinders the widespread adoption of new, promising developments. Operators and designers are not ready to take even the smallest risks for the sake of improving the efficiency of the existing network infrastructure, even when specific examples have already demonstrated positive results. Perhaps, if one digs into all the details, there is some higher logic to this. But the conclusion is a somewhat pessimistic one: just as we never got the flying cars that the creators of the "Back to the Future" trilogy promised us for 2015, we will not get them in another quarter of a century either.

Source: https://habr.com/ru/post/240059/

