
Software-defined data center: why it is needed in sysadmin practice


The concept of the software-defined data center appeared a long time ago. In practice, however, little of it actually worked outside of IaaS providers; most often it was just ordinary virtualization. Now you can go a step further on the VMware stack, or you can implement everything on OpenStack, in which case you have to think hard and weigh many factors.

Over the past year we have seen a very significant technological leap in applying SDDC in everyday sysadmin practice. There are now proven technologies for both network virtualization and storage virtualization, in the form of usable tools. And they can deliver real business benefits.

Why do you need it? It is very simple: it starts with automating routine work and decoupling from specific physical hardware, and ends with knowing exactly how much of each resource is consumed, down to the penny, and where and how the money in the IT budget goes. The last two reasons lie somewhat outside the usual admin's goals, but they are very useful for CIOs and for sysadmins of medium and large businesses who count on complete mutual understanding with the commercial department. And the payoff is already within reach.


Let's start with some history


In the 1960s, IBM took the right path, betting that mainframes would rule; that is where virtualization as such comes from. In the 1980s, the emergence of the IBM PC standard created a situation where you could buy a dozen machines instead of investing in heavyweight infrastructure. By 1990 a massive transition from centralized to decentralized architecture was underway. Only at the end of the 90s did more or less successful virtualization on this same x86 architecture appear. In the 2000s, serious commercial adoption of virtualization by large companies began, and about five years later new technologies emerged: VDI, application virtualization and SDN. In 2010, serious support for the SDN approach appeared in data center equipment. Now we see that SDN and SDS, software-defined networks and software-defined storage, are becoming a necessity and, together with compute virtualization, add up to the concept of SDDC, the software-defined data center.

Today the main thing is the consolidation of computing resources, when all hardware is seen through the virtualization platform as a common pool for computing, storing and transmitting data. Next in priority come disaster-tolerant solutions, from conventional backup data centers to "stretched" data centers. Our experience here is as follows: large retail mostly needs backup solutions; state-owned enterprises need to store a lot of data, so virtual storage for large volumes is their priority; banks need everything that optimizes their operations and lets them account for costs.

It must also be said that at the global level a new round of SDN development is beginning. Some foreign operators have started to look at SDN and NFV as a real replacement for current hardware solutions and have grown their own SDDC teams, while our operators still have this story ahead of them.

I already wrote about SDS in the previous post, so here I will only say a few conceptual words about SDN. I think a bit later we will publish a more detailed article on that part.

The essence of the SDN approach


The main feature of the stack is abstraction from the hardware and, as a consequence, abstraction of all service policies and capabilities, which avoids compatibility and scaling problems. Horizontal scaling becomes as simple as it gets.

Below is a familiar, time-tested approach to building a network.


And in the next picture is the SDN approach: in essence, we simply take the control plane out of all the network devices and move it to one place, with redundancy, naturally. The entire network infrastructure thereby turns into a single "big switch" whose interface modules can be physical switches from different vendors, physical servers with different hypervisors and virtual switches on them, and so on.



That, in general, is the whole approach.

Next: why SDDC is needed at all. All of this has probably been voiced from different sides and you have heard it before; in that case, consider that we simply agree, having heard the same thing from our customers as well.

Routine automation


Everyone solves this differently today. Some build a management system, some write scripts. But I think any admin would prefer not to have to create a virtual machine a hundred times a day, carve out a LUN, or install an OS whenever one is suddenly needed.

Suppose the IT department works on requests from service consumers. Today you need to create one server, and that is simple and fast. Tomorrow it is already 20. Each one needs addresses registered, networks created, connections made, routing set up, and so on. And storage allocated. It is good that we have virtual machines now and often you just click buttons, but that is not always rational. We know this, for example, from our test lab, where usually around 300-500 virtual machines are running simultaneously. And to consumers, slow provisioning often looks like downtime and service failure, so the complaints are constant.
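To make this concrete, here is a minimal provisioning sketch using the OpenStack SDK for Python. The cloud profile "mycloud", the image and flavor names, the network names and the CIDR are all placeholders, not anything from a real deployment:

```python
# A minimal provisioning sketch (pip install openstacksdk).
# Profile, image, flavor, names and CIDR below are placeholders.
import openstack

conn = openstack.connect(cloud="mycloud")  # credentials come from clouds.yaml

# Network and subnet for the new service
net = conn.network.create_network(name="app-net")
conn.network.create_subnet(
    network_id=net.id, name="app-subnet",
    ip_version=4, cidr="10.0.10.0/24",
)

image = conn.compute.find_image("ubuntu-22.04")
flavor = conn.compute.find_flavor("m1.medium")

# Twenty identical servers instead of twenty trips through a UI
for i in range(20):
    server = conn.compute.create_server(
        name=f"app-{i:02d}", image_id=image.id,
        flavor_id=flavor.id, networks=[{"uuid": net.id}],
    )
    conn.compute.wait_for_server(server)
```

Wrap the same calls in your request-tracking workflow, and a twenty-server day stops being an event.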

According to VMware, automation accounts for about 40% of the reasons customers buy its proprietary SDDC stack.

Resource management


It means that at any moment you can build a report on what is running, who owns it, how intensively each resource is used, and so on. This includes an inventory of hardware and software and access control over resources. And imagine how many dead VMs you may already have scattered across different parts of the infrastructure. How do you find them and figure out whether they are still needed?
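In an OpenStack-based cloud, a first pass at a "dead VM" report can be a few lines with the OpenStack SDK; the 30-day threshold, the cross-project visibility and the assumption of Nova's usual ISO timestamps are mine, not a standard:

```python
# A rough "dead VM" report; the 30-day staleness threshold is arbitrary.
from datetime import datetime, timedelta, timezone
import openstack

conn = openstack.connect(cloud="mycloud")
stale_after = datetime.now(timezone.utc) - timedelta(days=30)

for server in conn.compute.servers(all_projects=True):
    updated = datetime.fromisoformat(server.updated_at.replace("Z", "+00:00"))
    if server.status == "SHUTOFF" and updated < stale_after:
        # Powered off and untouched for a month: a reclamation candidate
        print(f"{server.name}\towner={server.project_id}\tlast change {updated:%Y-%m-%d}")
```

The same loop, grouped by project_id, is already half of the inventory report from the previous paragraph.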

So, say, the director comes to you and asks: "Why did you spend 3 million rubles on hardware and software this year?" What do you do in that situation? How do you show management how many resources there are, who uses them, by department or by system? How much is free? How do you answer questions like "How much does opening a new branch cost us in IT?" or "How does outsourcing this service compare with running it in-house?"

Every year the IT department negotiates its development budget. Suppose you understand that this $1 million storage system is absolutely necessary, that there is no way without it. How do you show that to the business? You have to operate in terms of consumption, and consumption must not just be counted but broken down by specific departments and services.

Cost management


This is probably the most important and most interesting thing SDDC can now give business users. It lets management know how much of which resources go into delivering each IT function, be it plain old mail, CRM or an ERP system. Moreover, you can integrate the cloud directly with these systems so that, for example, the cost of storage is distributed among departments as it is consumed. Exactly how much money goes to ERP or to CRM, in what shares, and so on. This understanding also matters when you need to buy some piece of infrastructure whose cost does not obviously fall on anyone in particular.

The main task is to turn the IT infrastructure into a business unit that provides services to the rest of the company. Imagine that you provide resources and are paid "money" for them, and you then automatically spend that "income" on IT modernization and support, raising fewer questions from management, because at any moment you can show who actually needs this IT and how much of it. This model is the one commercial departments understand best: in effect, they already work with every department this way, evaluating each within a profit-generation model.
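Even without a billing product, the showback arithmetic itself is trivial; here is a toy calculation where the unit rates and the per-service allocations are invented for illustration and would come from your own inventory in reality:

```python
# Toy showback: the rates and allocations below are invented for illustration.
RATES = {"vcpu": 5.0, "ram_gb": 1.0, "disk_gb": 0.05}  # cost per unit per day

usage = {  # per-service allocations, normally pulled from inventory/CMDB
    "ERP":  {"vcpu": 64, "ram_gb": 256, "disk_gb": 4000},
    "CRM":  {"vcpu": 32, "ram_gb": 128, "disk_gb": 1500},
    "mail": {"vcpu": 16, "ram_gb": 64,  "disk_gb": 6000},
}

for service, resources in usage.items():
    daily = sum(RATES[r] * amount for r, amount in resources.items())
    print(f"{service}: {daily:.2f}/day, {daily * 30:.2f}/month")
```

The hard part is not the formula but keeping the usage data honest, which is exactly what a software-defined stack gives you.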

Consider how hard it is today to find out how many resources (and therefore how much money) were spent on testing and implementing a specific feature, or what it will cost to keep each business system running once all the resources it uses are taken into account.

Full cloud ecosystem


As is, I think, already clear, a cloud is not only and not so much virtualization of computing (which is just the simplest part of IaaS) but also a software-defined network and software-defined storage. You can read a long primer on SDS here (https://habrahabr.ru/company/croc/blog/272795/). In short, we take all types of online storage, including servers with disks, "classic" centralized storage arrays and other kinds of libraries, and combine all of it into one virtual storage that can balance data properly, place "cold" and "hot" data correctly, and squeeze everything out of the hardware, up to and including using server RAM as a storage cache. That is the essence and the strategy: soon enough, only ultra-homogeneous, single-vendor data centers will be able to do without such solutions.
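To make the "hot" and "cold" idea concrete, here is a toy tiering policy; the threshold and tier names are invented and have nothing to do with any particular SDS product:

```python
# Toy tiering decision: promote busy blocks, demote idle ones.
# The threshold and tier names are invented for illustration.
HOT_THRESHOLD = 100  # reads per hour that count as "hot"

def choose_tier(reads_last_hour: int, current_tier: str) -> str:
    if reads_last_hour >= HOT_THRESHOLD:
        return "ssd"     # hot data goes to flash (with a RAM cache above it)
    if reads_last_hour == 0 and current_tier == "ssd":
        return "hdd"     # cold data migrates back to capacity disks
    return current_tier  # otherwise leave the block where it is

print(choose_tier(250, "hdd"))  # -> ssd
print(choose_tier(0, "ssd"))    # -> hdd
```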

With SDN it is a bit more complicated. Here the entire network as such is managed programmatically, and any node can be redefined at any time.
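A minimal illustration of "the network is an API call", again with the OpenStack SDK; every name and address here is a placeholder:

```python
# Defining network topology in code; names and CIDR are placeholders.
import openstack

conn = openstack.connect(cloud="mycloud")

net = conn.network.create_network(name="tenant-net")
subnet = conn.network.create_subnet(
    network_id=net.id, ip_version=4, cidr="192.168.50.0/24",
)
router = conn.network.create_router(name="tenant-router")
conn.network.add_interface_to_router(router, subnet_id=subnet.id)
# The same topology can be torn down and redefined just as easily.
```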

Open source vs. the proprietary approach


There are two ways to build a cloud:
  1. Proprietary. Suppose significant investments have already been made in a VMware platform, and you continue developing the cloud with what the vendor offers or recommends.
  2. Hybrid. You can freeze the current infrastructure and build a cloud solution on top of it based on OpenStack. Most often this path starts from the fact that you already have a good hypervisor (usually VMware or KVM), around which the open source stack is built. As a rule, this is the route the big players take.

Everyone understands that VMware has collected a huge number of technologies under its umbrella and is confidently continuing to develop them. But open solutions are not far behind, as practice shows. Consider just the fact that VMware itself has released and is developing its own OpenStack distribution, with a feature set for working in a VMware environment, essentially as a cloud management system.

That is, the second path is effectively legitimized.

At the same time, using VMware's version is not required, especially if you plan to abandon VMware altogether in the future and significantly cut costs.
Still, here are a few specifics that can play a role in the choice.


Proprietary vendors, using VMware as an example


VMware has gathered its products under a single brand, and on their basis you can assemble a full-featured software-defined data center, that is, an SDDC. In addition, a whole ecosystem of technology partners has formed around VMware, extending the software's capabilities.





OpenStack


OpenStack is open source, as everyone knows. For a private cloud this can be extremely useful, simply because you will hardly ever see two identical cloud implementations: organizations are all different, as are their internal processes and IT culture.

As a result, customization is possible at a very deep level. It is important to understand that this customization is achieved by writing or rewriting program code.

The components of OpenStack, as you remember, are listed here (http://openstack.ru/about/components/). Not that different from VMware when viewed from a bird's-eye view, is it?



The components are ideologically the same, although the maturity level of each of them may differ. Yet even telecom operators now say they will use it and sort out the specifics later.
And if at first glance it seems that, being open source, it will be hard to bolt the usual corporate enterprise software onto it, that is not the case.

There are SDN solutions, for example OpenDaylight and OpenContrail; the latter, by the way, is developed with the support of Juniper, one of the main players in the "traditional" network equipment market.
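For a taste of what talking to such a controller looks like, here is a hypothetical query against OpenDaylight's RESTCONF interface; the host, port, path and the default admin/admin credentials are assumptions that vary by version and deployment:

```python
# Asking an SDN controller for its view of the network over REST.
# Host, path and credentials are deployment-specific assumptions.
import requests

resp = requests.get(
    "http://odl-controller:8181/restconf/operational/"
    "network-topology:network-topology",
    auth=("admin", "admin"),
    headers={"Accept": "application/json"},
    timeout=10,
)
resp.raise_for_status()
for topo in resp.json()["network-topology"]["topology"]:
    print(topo.get("topology-id"), "nodes:", len(topo.get("node", [])))
```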

There are SDS solutions too: DataCore, for example, can work with OpenStack, combining all your existing storage systems and presenting them in a form OpenStack understands.

Suppose you need a good load balancer: you can safely take an F5, which has integration with Neutron. There is also an interesting startup, Avi Networks, doing SDN, NFV and load balancers, also for OpenStack. In general, integration with OpenStack has become good form for enterprise software.
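As a sketch of what that integration means in practice, here is a load balancer declared through OpenStack's LBaaS API (Octavia-style) with the OpenStack SDK; with an F5 driver plugged into Neutron, calls like these would be carried out by the F5 under the hood. The subnet id, names, address and ports are placeholders:

```python
# Declaring a balancer via the OpenStack LBaaS (Octavia) API;
# the subnet id, names, address and ports are placeholders.
import openstack

conn = openstack.connect(cloud="mycloud")

lb = conn.load_balancer.create_load_balancer(
    name="web-lb", vip_subnet_id="SUBNET_ID",
)
listener = conn.load_balancer.create_listener(
    name="web-http", load_balancer_id=lb.id,
    protocol="HTTP", protocol_port=80,
)
pool = conn.load_balancer.create_pool(
    name="web-pool", listener_id=listener.id,
    protocol="HTTP", lb_algorithm="ROUND_ROBIN",
)
conn.load_balancer.create_member(
    pool, address="10.0.10.11", protocol_port=8080,
)
```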

A reader might object at this point: we came here for open source, so why talk about paid software again? Because it is best to regard OpenStack as a cloud platform, the unifying link that automates and ties together specific functions rather than necessarily providing them all itself. Automation is its main task.

Summary



There is an old programmers' principle: "if it works, don't touch it." Perhaps that is what keeps the IT staff of many enterprises from starting open source experiments. But now exchange-rate swings work against companies' IT budgets, even though the price of Western software in foreign currency has not gone up. So even keeping the infrastructure at its existing level requires more money than before. Simply put, it has finally become so expensive to be this proprietary that even the commercial departments have noticed.

That is why the share of open source solutions in our customers' infrastructures is now growing noticeably. And I think many will come to the same conclusion about the hybrid path of cloud development.

So the plan of action is roughly as follows:



  1. Can the company benefit from SDDC technologies, and from which ones? To answer that, you need to understand, for example, how much you need a private or hybrid cloud. A head-on assessment will not work, and the paradox is that neither the IT department nor the commercial department can do it alone: you have to cooperate and figure out what matters to you. For example, would a 30-40% acceleration of the development cycle be essential for a bank? For sure! And that, in general, can be achieved with proper test environments and by removing the bottlenecks in resource allocation. For an industry with a high share of IT in its development this is even more important. Faster delivery of infrastructure services sounds great but, again, can be hard to price directly at the IT department level.
  2. Decide on the architecture. If you already use some piece of the proprietary stack, that by itself is no reason to build everything on it. I will note that the repeatedly mentioned VMware, for example, has a pile of third-party commercial add-ons that turn the whole system into a walled garden, but in return let you do everything from a single panel. For money. On the other hand, if you have the strength to wrestle with OpenStack, be prepared to do plenty of filing and fitting by hand, but it may come out much cheaper. As a rule, the proprietary path suits financial organizations because of vendor guarantees, or companies with high staff turnover in the IT department.
  3. Model it in miniature. Alas, implementing SDDC components is very hard to do "on the knee, for project N", since it affects the whole architecture.
  4. See these documents here. This is foreign experience, quite successful and quite applicable in the Russian context. The range of implementations goes from universities to large financial organizations, well-known application software vendors, and so on.

    In general, see:

  5. Of course, changing the architecture is not the fastest of undertakings. The path is long, but the results can greatly exceed expectations. This type of infrastructure saves a lot of money and resources, and most importantly, you can start rebuilding parts of the infrastructure (for example, the least critical ones that still consume resources) right now. In the meantime, start collecting data from colleagues, Western peers and professional forums.
  6. If you are already set to go and need to work something out in rough outline, I can help at the level of advice and a first approximation: my mail is albelyaev@croc.ru. By the way, around May we will hold the first admin training on DataCore; there are, however, no vacancies left. But if anyone is interested, write me privately or by e-mail, and I will tell you about the seminar; as soon as a group is gathered, we will schedule a new one. And on April 14 we will hold an extended seminar with an online broadcast on infrastructure optimization using network/storage virtualization, VDI, etc. The announcement will appear here soon.

Source: https://habr.com/ru/post/278929/

