
How cloud technologies affect data centers





When it comes to changes in cloud computing and data centers, trends speak for themselves.



At 1cloud we are building our own cloud service, so we simply cannot ignore these changes. The volume of cloud usage continues to grow: according to Gartner, in 2016 cloud resources will account for a significant portion of all IT budget spending.


Moreover, as Cisco notes, by 2018, 78% of all workloads will be processed in cloud data centers, and 59% of the total cloud workload will be SaaS. Today it is impossible to imagine the operation of a large bank or a telecom operator without processing huge volumes of data that need to be stored, processed, and transmitted. All of this is driving significant and rapid changes in how data centers work.



The starting point of this new era can be considered virtualization, which became a key factor in increasing the efficiency of hardware utilization. As it soon turned out, virtually everything can be virtualized: servers, storage systems, telephony and mail services.



The first organizations to move from understanding the issue to concrete action were fast-growing Internet companies, which at the time had to cope with incredible growth of their customer bases and meet demanding infrastructure requirements.



Then came the first cloud service providers, offering virtualized servers and software for rent. The third wave of virtualization reached the corporate market, for which affordable and proven solutions were already available by that time.



As a result, new technological approaches to organizing the IT infrastructure of data centers began to develop. The first thing worth mentioning is the emergence of management and orchestration systems for infrastructure services, which turn a set of virtualized servers into cloud services such as SaaS, PaaS, and IaaS. To a certain extent, all of this increased the utilization of servers and network connections in the data center.
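
To make the orchestration idea more concrete, here is a minimal sketch, not tied to any specific product, of how an orchestrator can reconcile a declarative service description with a pool of virtualized servers. The `Hypervisor` class, the service spec format, and the `reconcile` helper are hypothetical, purely for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Hypervisor:
    """A hypothetical virtualized host with room for a fixed number of VMs."""
    name: str
    capacity: int
    vms: list = field(default_factory=list)

    def has_room(self) -> bool:
        return len(self.vms) < self.capacity

# Declarative description of an IaaS-style service: we state *what* we want,
# and the orchestrator decides *where* to run it.
service_spec = {"name": "web-tier", "replicas": 3, "image": "ubuntu-16.04"}

def reconcile(spec, hosts):
    """Create missing VMs until the desired replica count is reached."""
    running = sum(vm.startswith(spec["name"]) for h in hosts for vm in h.vms)
    for i in range(running, spec["replicas"]):
        host = next((h for h in hosts if h.has_room()), None)
        if host is None:
            raise RuntimeError("no capacity left in the pool")
        host.vms.append(f'{spec["name"]}-{i}')  # stand-in for a real provisioning call
        print(f'started {spec["name"]}-{i} ({spec["image"]}) on {host.name}')

pool = [Hypervisor("node-1", capacity=2), Hypervisor("node-2", capacity=2)]
reconcile(service_spec, pool)
```

The point of the sketch is the separation of concerns: the service owner describes the desired state, and the orchestration layer maps it onto whatever virtualized hardware is available.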



However, the main driving force of the data center market is becoming the converged infrastructure (CI) model. Such infrastructure should be called not so much a driving force as the likely form factor of the data centers of the future. At its core, converged infrastructure is a way of consolidating various IT components into an optimized computing solution. In general, it includes servers, network equipment, storage systems, and the software needed to manage them.



CI allows businesses to centralize IT resource management, consolidate systems, and reduce costs. These goals are achieved by creating a pool of compute nodes and other resources that are distributed among applications. Whereas previously each service or application meant a separate, dedicated computing resource, the converged model optimizes how those resources are used.
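
As an illustration of the pooling idea, below is a minimal sketch of distributing application workloads across a shared pool of compute nodes using simple first-fit placement. The node sizes, workload names, and the `place` helper are hypothetical and only meant to show why pooling raises utilization compared to one dedicated machine per application.

```python
# Minimal sketch: applications draw from one shared pool instead of
# each owning a dedicated server (hypothetical sizes, in CPU cores).
nodes = {"node-a": 16, "node-b": 16, "node-c": 16}            # free cores per node
apps = {"billing": 6, "crm": 10, "mail": 4, "reporting": 8}   # required cores

def place(app: str, cores: int, free: dict) -> str:
    """First-fit: put the app on the first node with enough free cores."""
    for node, avail in free.items():
        if avail >= cores:
            free[node] -= cores
            return node
    raise RuntimeError(f"pool exhausted, cannot place {app}")

for app, cores in apps.items():
    print(f"{app:>10} -> {place(app, cores, nodes)}")

print("remaining free cores:", nodes)
```

With dedicated hardware these four applications would each occupy their own server; in the pooled model they fit on two nodes, and the third remains free for future growth.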



Flash memory technologies can be considered a catalyst for the development of converged infrastructure.



The natural reaction of data center architects to the emergence of numerous non-volatile memory technologies, whose read speeds are many times higher than those of disk drives, was to build clusters of flash memory chips managed by a chip emulating a disk controller. SSDs have been replacing hard drives in critical locations of data centers for several years now.







On the server side, SSDs began to displace small, low-latency embedded SAS drives. Elsewhere in the infrastructure, large pools of flash memory have become an alternative to, or a front end for, high-capacity disk arrays. There are also periodic proposals to replace "cold" storage - slow, high-capacity disks holding rarely accessed data - with flash arrays.
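
To illustrate the "flash in front of a disk array" pattern mentioned above, here is a minimal sketch of a read-through cache: a small, fast tier (standing in for flash) answers repeated reads, while misses fall through to a large, slow backing store (standing in for the disk array). The class name and the capacity figure are hypothetical, purely for illustration.

```python
from collections import OrderedDict

class ReadThroughCache:
    """Tiny LRU read-through cache: a 'flash' tier in front of a slow 'disk' tier."""
    def __init__(self, backing: dict, capacity: int):
        self.backing = backing        # stands in for the high-capacity disk array
        self.capacity = capacity      # how many blocks fit in the flash tier
        self.flash = OrderedDict()    # LRU order: least recently used first

    def read(self, block_id: str) -> bytes:
        if block_id in self.flash:                 # cache hit: served from flash
            self.flash.move_to_end(block_id)
            return self.flash[block_id]
        data = self.backing[block_id]              # cache miss: go to the disk array
        self.flash[block_id] = data
        if len(self.flash) > self.capacity:        # evict the least recently used block
            self.flash.popitem(last=False)
        return data

disks = {f"block-{i}": bytes([i]) for i in range(100)}
cache = ReadThroughCache(disks, capacity=8)
cache.read("block-1")   # miss, fetched from the array
cache.read("block-1")   # hit, served from flash
```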



It should also be understood that the cloud has a serious impact on network solutions. Cloud technologies have increased many companies' dependence on data centers: a modern data center must operate without interruptions or service shutdowns, providing availability close to 100%.



According to Statista.com, just a couple of years ago, in 2012, the annual volume of data center traffic was 1 exabyte, and in 2015 it approached 3 exabytes. Experts predict that in 2019 data center traffic will be 8.6 exabytes per year.
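
Taking these figures at face value, a quick calculation shows the growth rate they imply; the snippet below simply does the arithmetic on the numbers quoted above.

```python
# Implied compound annual growth rate (CAGR) of data center traffic,
# using the figures quoted above: 1 EB/year in 2012 -> 8.6 EB/year in 2019.
start, end = 1.0, 8.6          # exabytes per year
years = 2019 - 2012            # 7 years
cagr = (end / start) ** (1 / years) - 1
print(f"implied growth: about {cagr:.0%} per year")   # roughly 36% per year
```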







In this regard, data center architects need to predict loads correctly as early as the construction stage, and this is not a trivial task. Oversized infrastructure will not be profitable to operate and will simply heat the air in the data center.



Architects also need to understand that working with converged systems and multi-tenant platforms imposes specific requirements on power and cooling, which must be easily scalable and cost-effective. It is the need to maintain low temperatures that drives global companies to open data centers in the most unusual places.



The Swedish provider Bahnhof AB built a data center in Stockholm in a former bunker at a depth of 30 meters. The room carved into the rock remained largely untouched after reconstruction, although it was slightly expanded to accommodate an office.






It is rivaled by the IceCube project, located near the Amundsen-Scott polar station at the South Pole. This data center is considered the southernmost in the world and serves to process the large volumes of data generated by the station's various research instruments.



Natural features are often used to achieve the maximum cooling effect in data center operations. For example, the Green Mountain data center, located in a Norwegian fjord, is cooled by air flows formed in the narrow passages between the mountains. This approach is environmentally friendly and significantly reduces the energy consumed by cooling plants.



In addition to underground data centers, there are other unusual solutions. American entrepreneurs Arnold Magcale and Daniel Kekai decided to launch a network of floating data centers, believing that such facilities will be better protected from natural disasters like earthquakes and, in an unforeseen situation, can be moved from place to place. The startup spent six years building a test data center; the project included patenting a cooling system that uses the very water on which the data center floats.



These unusual solutions, in which the forces of nature assist data center operations, provide unprecedented flexibility and scalability, since capacity can be increased without additional investment in purchasing or upgrading cooling systems.



In conclusion, I would like to note that all the changes happening to the infrastructure are interrelated, and a breakthrough in one area, such as storage devices, very often triggers a chain of other events. Ultimately, all these efforts aim to improve the competitiveness and financial performance of data centers that provide cloud services to end users, to allow upgrades without stopping the service, and to implement optimal solutions.






Source: https://habr.com/ru/post/279547/


