
OpenShift 4.0 - getting ready for the hyper-jump

This is the first in a series of posts about the improvements and additions coming in version 4.0 of the Red Hat OpenShift platform; it will help you prepare for the transition to the new version.



What tools will you have at your disposal for building better software products, and how will they improve security and make development easier and more reliable?

From the moment representatives of the newly formed Kubernetes community first gathered at Google's Seattle office in the fall of 2014, it was already clear that the Kubernetes project was destined to fundamentally change how software is developed and deployed. Meanwhile, public cloud providers kept investing heavily in infrastructure and services, which made working with IT and building software dramatically easier and more accessible, to a degree that few could have imagined at the start of the decade.
Of course, every new cloud service announcement was accompanied by lengthy expert discussions on Twitter and arguments on all sorts of topics: the end of the open-source era, the decline of on-premises IT, the inevitability of a new software monopoly in the cloud, and how the new paradigm X would replace all the others.

The reality, however, is that nothing really goes away, and today we see exponential growth in both end products and the methods used to build them, driven by software's ever-growing presence in our lives. And even though everything around us keeps changing, at its core everything stays the same: software developers will keep writing code with bugs, operations and reliability engineers will keep carrying pagers and receiving automated alerts in Slack, managers will still think in terms of OpEx and CapEx, and every time something fails, a developer will sigh sadly and say: "I told you so..."

As projects grow more complex, new risks appear, and today people's lives depend on software to such an extent that developers simply have to do their job better.

Kubernetes is one such tool. We are working to integrate it with other tools and services into a single platform, Red Hat OpenShift, that makes software more reliable, easier to manage, and safer for its users.

This naturally raises the question: how do we make working with Kubernetes easier and more convenient?

The answer may seem surprisingly simple:


The next release of OpenShift should draw on the experience of its creators and of other developers deploying software at scale in the world's largest companies, as well as on the accumulated experience of the open ecosystems that underpin the modern world. At the same time, it must leave behind the old amateur-developer mentality and adopt the philosophy of an automated future. It should be a bridge between the old and new ways of deploying software, making full use of all available infrastructure, whether it is operated by the largest cloud provider or runs on tiny edge systems.

How to achieve this result?


At Red Hat, we are used to doing tedious, thankless work for long stretches of time in order to keep the communities we have built healthy and to prevent the projects the company participates in from dying. The open-source community is made up of a huge number of talented developers who create the most unusual things: entertaining, educational, opening up new possibilities, or simply beautiful. Of course, nobody expects all participants to move in the same direction or pursue the same goals. Harnessing that energy and redirecting it is sometimes necessary to develop the areas that would benefit our users, but at the same time we have to follow where our communities are going and learn from them.

In early 2018, Red Hat acquired CoreOS, which shared a similar view of the future: safer, more reliable, and built on open-source principles. The company has continued to develop these ideas and put them into practice, bringing our philosophy to life in the attempt to keep all software running securely. All of this work is built on Kubernetes, Linux, public clouds, private clouds, and the thousands of other projects that underpin our modern digital ecosystem.

The new release of OpenShift 4 will be understandable, automated and more natural.

The OpenShift platform will run on the best and most reliable Linux operating systems, with bare-metal support, convenient virtualization, automated infrastructure provisioning and, of course, containers (which are essentially just Linux images).

The platform must be secure from the outset, yet still let developers iterate conveniently; that is, it must be flexible and reliable enough while remaining easy for administrators to audit and manage.

It should make it possible to run software "as a service" without the infrastructure spiralling out of control for operators.

It will let developers focus on building real products for users and customers. They will no longer have to wade through a jungle of hardware and software settings, and accidental complexity will be a thing of the past.

OpenShift 4: a NoOps platform that does not require maintenance


This post has described the goals that shaped the company's vision for OpenShift 4. The team's job is to make the everyday work of using and maintaining software as simple as possible, and to make those processes easy and painless for operations specialists and developers alike. But how do we move closer to that goal? How do we build a platform for running software that requires minimal intervention? And what does NoOps mean in this context?

In the abstract, for developers the terms "serverless" and "NoOps" mean tools and services that hide the "operations" component, or at least minimize that burden for the developer.


The goal, as before, is to speed up software development iterations and make it possible to build better products, while freeing developers from worrying about the systems their software runs on. Experienced developers know that once you focus on users the picture can change quickly, so you should not pour too much effort into writing software unless you are certain it is needed.

For people who do maintenance and operations, the word "NoOps" can sound somewhat frightening. But when you talk to operations engineers, it becomes clear that the patterns and methods they use to ensure reliability (Site Reliability Engineering, SRE) overlap heavily with the patterns described above:


SRE specialists know that things can go wrong and that they will have to detect and fix problems, so they automate routine work and agree on acceptable error budgets in advance in order to be ready to prioritize and make decisions when a problem actually occurs.
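
To make the idea of an error budget concrete, here is a minimal Python sketch (not from the original article; the function name and the 99.9%-over-30-days figures are purely illustrative) showing how an availability target translates into a concrete amount of tolerable downtime:

    # Illustrative sketch: convert an availability SLO into an error budget,
    # the number an SRE team uses to prioritize work when something breaks.

    def error_budget_minutes(slo: float, window_days: int = 30) -> float:
        """Allowed downtime, in minutes, for a given availability SLO over a window."""
        total_minutes = window_days * 24 * 60
        return total_minutes * (1.0 - slo)

    if __name__ == "__main__":
        # Example: a 99.9% availability target over a 30-day window.
        budget = error_budget_minutes(0.999, 30)
        print(f"99.9% over 30 days leaves {budget:.1f} minutes of error budget")
        # Prints roughly 43.2 minutes; once that budget is spent, reliability
        # work takes priority over shipping new features.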

Kubernetes within OpenShift is a platform designed to solve two main tasks: instead of forcing you to deal with virtual machines or load-balancer APIs, you work with higher-order abstractions, namely deployments and services; instead of installing software agents, you run containers; and instead of writing your own monitoring stack, you use the tools already built into the platform. So the secret ingredient of OpenShift 4 is really no mystery: take the principles of SRE and the serverless concepts and carry them to their logical conclusion, in order to help developers and operations engineers.
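
To give a flavor of what those "higher-order abstractions" look like in practice, here is a minimal sketch using the official Kubernetes Python client (an assumption on our part, since the article itself shows no code; the names, namespace, and container image below are made up) that declares a Deployment and a Service and leaves it to the platform to keep the replicas running:

    # Minimal sketch: declare a Deployment and a Service instead of managing
    # individual machines and hand-configured load balancers.
    # Assumes the "kubernetes" Python client is installed and a kubeconfig is set up.
    from kubernetes import client, config

    config.load_kube_config()  # reads ~/.kube/config or $KUBECONFIG

    apps = client.AppsV1Api()
    core = client.CoreV1Api()

    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="hello", labels={"app": "hello"}),
        spec=client.V1DeploymentSpec(
            replicas=3,
            selector=client.V1LabelSelector(match_labels={"app": "hello"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "hello"}),
                spec=client.V1PodSpec(containers=[
                    client.V1Container(
                        name="hello",
                        image="quay.io/example/hello:latest",  # placeholder image
                        ports=[client.V1ContainerPort(container_port=8080)],
                    )
                ]),
            ),
        ),
    )

    service = client.V1Service(
        api_version="v1",
        kind="Service",
        metadata=client.V1ObjectMeta(name="hello"),
        spec=client.V1ServiceSpec(
            selector={"app": "hello"},
            ports=[client.V1ServicePort(port=80, target_port=8080)],
        ),
    )

    apps.create_namespaced_deployment(namespace="default", body=deployment)
    core.create_namespaced_service(namespace="default", body=service)
    print("Deployment and Service created; the platform keeps 3 replicas running.")

The point is not the specific client library but the level of the conversation: you describe the desired state, and the cluster does the operational work of getting there and staying there.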


But how does the OpenShift 4 platform differ from its predecessors and from the "standard" approach to solving these problems? How does it scale for delivery and operations teams? Because here the cluster is king.


Want to see the capabilities of the platform in action?


A preview version of OpenShift 4 is now available to developers. With an easy-to-use installer, you can bring up a cluster on AWS on top of Red Hat CoreOS. To use the preview, you only need an AWS account to provision the infrastructure and a set of credentials to access the preview images.

  1. To get started, go to try.openshift.com and click “Get Started”.
  2. Log in to your Red Hat account (or create a new one) and follow the instructions to set up your first cluster.

After a successful installation, check out our OpenShift Training materials to get a more detailed understanding of the systems and concepts that make the OpenShift 4 platform such a simple and convenient way to run Kubernetes.
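
Once the installer finishes, you can also sanity-check the new cluster from code. The sketch below is illustrative rather than part of the official flow; it assumes the "kubernetes" Python client is installed and that the installer wrote its admin credentials to <install-dir>/auth/kubeconfig (adjust the path to match your installation directory):

    # Illustrative post-install check: list the cluster nodes and their Ready
    # status using the kubeconfig produced by the installer (path is an assumption).
    from kubernetes import client, config

    config.load_kube_config(config_file="./my-cluster/auth/kubeconfig")

    v1 = client.CoreV1Api()
    for node in v1.list_node().items:
        ready = next(
            (c.status for c in node.status.conditions if c.type == "Ready"),
            "Unknown",
        )
        print(f"{node.metadata.name}: Ready={ready}")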

And at the DevOpsForum 2019 conference, one of the OpenShift developers, Vadim Rutkovsky, will run a hands-on workshop, "The whole system needs changing: repairing broken k8s clusters with certified locksmiths", where he will break ten clusters and show how to repair them.

Admission to the conference is paid, but the promo code #RedHat gives a 37% discount.

Join us on April 20: the workshop takes place in Hall #2 at 5:15 pm, and our booth is open all day. Useful product information, meetings with experts, T-shirts, caps, Red Hat stickers: everything as usual! :-)

Source: https://habr.com/ru/post/445558/

