Cisco has presented the results of its Global Cloud Index report. We'll look at how much businesses spend on IT infrastructure and how the cloud provider market will develop.
/ photo CommScope CC BY

The Cisco Global Cloud Index (GCI) estimates and forecasts global IP traffic in clouds and data centers. The report covers trends in cloud computing and data center virtualization over the 2016–2021 period.
The report accounts for traffic within and between data centers, as well as traffic between data centers and end users (Table 1 in the methodology).
What hyperscale costs
We recently talked about the trend toward a growing number of hyperscale data centers. The Cisco report likewise names the development of hyperscale infrastructure as one of the main trends.
According to Cisco's forecasts, the number of hyperscale data centers will reach 628 by 2021 (up from 338 today). By then, traffic in such data centers will quadruple and account for 55% of all intra-data-center traffic (up from 39% today).
To estimate the number of hyperscale data centers, Cisco analyzed cloud providers' revenue. The report's logic was as follows: if a company offering IaaS/PaaS/SaaS services meets the chosen revenue criteria for those activities, it is assumed to operate infrastructure capable of supporting hyperscale data centers. According to the report, the minimum revenue thresholds for a hyperscale provider are:
- $1 billion for IaaS, PaaS, and hosting providers (Rackspace, Google);
- $2 billion for SaaS providers (Salesforce, ADP, Google);
- $4 billion for Internet providers, search engines, and social networks (Facebook, Yahoo, Apple);
- $8 billion for e-commerce and payment processing services (Alibaba, eBay).
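As a rough sketch, the report's classification logic can be expressed in a few lines. The thresholds come from the list above; the segment keys and the function itself are illustrative, not part of the report:

```python
# Thresholds in USD billions, taken from the Cisco GCI criteria above.
# Segment names and the helper function are illustrative assumptions.
THRESHOLDS = {
    "iaas_paas_hosting": 1.0,       # Rackspace, Google
    "saas": 2.0,                    # Salesforce, ADP, Google
    "internet_search_social": 4.0,  # Facebook, Yahoo, Apple
    "ecommerce_payments": 8.0,      # Alibaba, eBay
}

def qualifies_as_hyperscale(segment: str, revenue_billion: float) -> bool:
    """True if a provider's segment revenue meets the GCI threshold."""
    return revenue_billion >= THRESHOLDS[segment]

print(qualifies_as_hyperscale("saas", 2.5))              # True
print(qualifies_as_hyperscale("ecommerce_payments", 5))  # False
```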
So far, 24 providers worldwide meet these criteria. Despite their impressive revenue, hyperscale data centers remain extremely expensive projects: according to a Platformonomics analysis, AWS, Microsoft, and Google together have spent $100 billion building hyperscale infrastructure.
What the money is spent on
According to a report from Spiceworks' analytics division, 44% of companies plan to increase their IT budget in 2018, 43% will keep it unchanged, and 11% intend to cut IT costs [the survey covered North American and European companies of all sizes, from micro-enterprises to large businesses]. Depending on the size of the organization, cloud infrastructure will account for roughly a third of the total IT budget.
The report contains another interesting point: 10% of all software spending will go to virtualization-related tasks (our examples of virtual infrastructure).
This is consistent with the trends noted in the Cisco report. In the GCI, the company emphasizes the pace at which businesses around the world are moving to cloud computing. Cloud workloads are expected to grow 2.7-fold from 2016 to 2021, while workloads on traditional infrastructure will shrink at an average annual rate of 5%.
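For a sense of scale, those multipliers can be converted into annual rates. The 2.7x growth and 5% decline figures come from the report; the arithmetic below is standard compound-growth math:

```python
# Convert the GCI's five-year multipliers into annual rates.
years = 2021 - 2016  # five-year forecast window

# 2.7x growth over five years corresponds to ~22% compound annual growth.
cloud_cagr = 2.7 ** (1 / years) - 1

# A 5% average annual decline leaves ~77% of traditional workloads by 2021.
traditional_remaining = 0.95 ** years

print(f"Cloud workload CAGR: {cloud_cagr:.0%}")
print(f"Traditional workloads remaining by 2021: {traditional_remaining:.0%}")
```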
Other companies note the same trends. According to Gartner, by 2020 the IaaS and PaaS markets will reach $72 billion and $21 billion, respectively. Cisco also notes growing investment in IoT solutions: smart machines, smart cities, and related applications. By 2021, the number of IoT connections is expected to reach 13.7 billion, up from 5.8 billion in 2016, which will require additional cloud infrastructure resources.
/ photo Robert CC BY

A custom approach
Large IT companies (Google, Facebook, Microsoft, etc.) are actively investing in hardware designed for their specific needs. One example built on custom hardware is the Open Compute Project (OCP). Facebook launched it to create (in its own words) "the most energy-efficient data center, providing unprecedented scalability at the lowest price."
The engineering team did indeed design a unique data center: according to OCP, it is 38% more energy efficient and 24% cheaper to maintain than the data centers Facebook used previously. Other companies have since joined the project, exchanging ideas within OCP on how to build "custom data centers."
Microsoft, for example, is building its own solution on top of OCP. At the Zettastructure conference, the IT giant introduced Project Olympus, a new model for developing open-source hardware. Azure already uses it in production with virtual machines of the Fv2 family. Microsoft says the project will help customers tackle financial modeling and deep learning workloads.
Custom chips tailored to specific workloads are also in development. Such projects include Google's Tensor Processing Unit (TPU) for deep learning and Microsoft's field-programmable gate arrays (FPGAs) for accelerating Azure systems. Intel, too, continually develops processors for the individual needs of IT companies such as Oracle and Facebook: on request, it can implement additional interfaces, connect custom electronics directly to the CPU cores, or create a custom set of processor instructions.