Our conference on DevOps tools and approaches is already tomorrow, which means it is time for the last interview! This time we asked one of Google's team leads a few questions about how Kubernetes and Istio work together; Istio's 1.0 release is scheduled for early next year.
Craig will explain why it is worth deploying in containers even on a single machine, when to bring in an orchestration system, what alternatives to Kubernetes exist, and what lies ahead. Details are under the cut.


Craig Box is an expert and head of one of the divisions at Google Cloud. His responsibilities include working with platforms, collecting user feedback, and interacting with engineers. He started as a system administrator, then moved into development, deployment, DevOps, consulting, and management.
- Please tell us a little about your talk. Is the combination you are talking about already used by someone in production, or is it a concept? How mature is Istio?

Craig Box: Istio is an open-source service mesh that separates traffic management from application development. You can think of it as a network of services rather than of bytes or packets.
When people move their applications from a monolithic process to microservices, they introduce a network between components, and a network of many distributed endpoints does not always behave reliably. Some companies, notably Netflix, solved their microservice networking problems with a library that every microservice has to include. Since microservices let you develop each component in any language, the library has to be maintained in several languages, and that soon became a problem of its own.
Istio gives you a single, uniform way to manage existing and new services in any language: it helps you manage, monitor, and secure microservices written in any language and deployed anywhere.
The administrator defines the rules (for example, "send 10% of backend traffic to my new service" or "make sure that all traffic between A and B uses mutual TLS"). Istio places a proxy in front of each service that is programmed to enforce these rules. And even without defining anything, simply installing Istio on a Kubernetes cluster immediately gives you a rich set of tools for monitoring traffic and distributed tracing between your endpoints.
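To make the "send 10% of traffic to my new service" rule concrete, here is a sketch of what such a rule can look like as Istio configuration. The exact resource names and schema vary between Istio releases (this follows the later VirtualService API rather than the 0.2-era one), and `backend`, `v1`, and `v2` are hypothetical names:

```yaml
# Weighted routing: 90% of requests to subset v1, 10% to subset v2.
# Subsets would be defined in a matching DestinationRule keyed on pod labels.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: backend
spec:
  hosts:
  - backend            # the in-mesh service this rule applies to
  http:
  - route:
    - destination:
        host: backend
        subset: v1
      weight: 90
    - destination:
        host: backend
        subset: v2     # the new version receiving the canary traffic
      weight: 10
```

The key point is that the split is declared centrally and enforced by the proxies, so no application code changes are needed to shift traffic.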
The proxy Istio uses is called Envoy. It was written by a team at Lyft led by Matt Klein. Lyft has been running Envoy in its own projects for a long time and has tested it in systems of many different scales.
The Istio community originally consisted of Google, IBM, and Lyft; many other members have since joined, and version 0.2 has been released. We regard it as a beta and plan limited production use with version 0.3 at the end of this year; release 1.0 is scheduled for 2018 and will support even more environments.
Initially, Istio was designed with Kubernetes in mind, and Kubernetes is, of course, a mature, production-ready product. The research firm RedMonk claims that Kubernetes is used by 54% of Fortune 100 companies, which represents 75% of those that use containers.
- Unlike most DevOps practices, the need for orchestration systems is not always obvious, especially for relatively small teams and projects. Are they needed by everyone, or do such systems only pay off where there are microservices and a large number of machines?

Craig Box: Even if you have only one service on one machine, there are still advantages to using containers: you can deploy your application with all its dependencies as a single atomic unit, be sure that it will not consume more resources than it is allowed, and easily roll back any change at any time.
Even if you have only one container, Kubernetes is an excellent API for managing the life cycle of that container on any machine; it handles the abstractions below your application, such as memory and networking, and provides an API server that your deployment tools can easily drive.
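Even a single-container application can be described declaratively through that API. A minimal sketch of such a manifest, with hypothetical names and an illustrative image and resource limits (the `apiVersion` may differ on older clusters):

```yaml
# One replica of one container, with explicit resource limits so it
# cannot consume more memory or CPU than it is allowed.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: example.com/my-app:1.0   # the app and its dependencies, as one atomic unit
        resources:
          limits:
            memory: "256Mi"
            cpu: "500m"
```

Because the whole deployment is a versioned object in the API server, rolling it back is a single API call rather than a manual procedure.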
You now have a portable application that you can move between environments, and once your application is set up this way, you can move on to scaling.
- How can you tell that an orchestration system is now necessary, if the project started out without one?

Craig Box: A system like Kubernetes is managed through an API from start to finish. You can automate everything from creating clusters with a service such as Google Container Engine to deploying and updating applications on those clusters.
One of the most common reasons people do this is to increase the rate of change. It goes something like this: by moving to containers, orchestration, and microservices in the cloud, you reduce the risk of any single deployment, which allows you to keep the service running continuously. Once you are confident in your process, you can adopt patterns such as canary deployment (where at first 1% of traffic goes to the new service, then 5%, then 20%, and so on), and you can roll back if something goes wrong.
Our customers have told us that they have gone from one deployment per month to dozens per day. That means new code ships faster, which ultimately makes the business more agile.
- Are there any serious alternatives to Kubernetes? Or is its model so good and versatile that in principle they are not needed, and in the end there will be only one?

Craig Box: Kubernetes is an evolution of the cluster-management model created at Google. We published papers about our work that influenced other systems in this space, for example Mesos.
Other similar systems appeared at about the same time as Kubernetes. Many of them have since moved to Kubernetes (for example, OpenShift, Deis Workflow, and recently Rancher).
One of the reasons Google decided to create Kubernetes and release it under an open-source license is that everyone needed a standard API that works across all vendors.
Kubernetes has been called "Linux for the cloud." While Linux is in some sense the de facto choice when you need Unix, there is still an ecosystem of other open-source operating systems for different situations, not to mention the closed-source world and Windows.
Similarly, we created a fantastic set of APIs in Kubernetes that lets you manage all kinds of applications: from stateful web services to batch workloads for ML models with big data and GPUs. We also spent a lot of time building extensibility into Kubernetes, which lets you define types of objects we never even thought about. We see Kubernetes as the core of an ecosystem and something you can extend.
- How difficult is it to maintain existing orchestration solutions? Can you set them up quickly and forget about them, or do they demand constant tuning, cluster failures, and other problems?

Craig Box: Every user has their own reasons for using clusters. Some share them to maximize utilization and may want them to run for a long time. Others want to spin them up only when needed, and to give different business units their own clusters. People working with a fixed number of machines look at usage differently from people working in the cloud, where machines can be added automatically as needed.
We designed Kubernetes to work well in all situations, and our product, Google Container Engine, will allow you to deploy a cluster in less than three minutes.
- What problems are difficult or impossible to solve with existing orchestration systems? Where is their development heading?

Craig Box: Container orchestration systems are well suited to general dynamic workloads in scalable systems. If you run a single commercial database on one machine and it uses all of that machine's resources, we would recommend that you change nothing: if that server fails, the effort and cost of moving it may exceed the effort and cost of repair. Given that, our recommendation is simply to connect to it as an external service from within your cluster.
Similarly, when you have a managed service, such as Google BigQuery, you do not need to run the data store in your cluster.
Istio recognizes that not all applications will run on Kubernetes. As of version 0.2, you can add services running on both virtual and physical machines to the mesh.
Oracle has published scripts for running Oracle Database in a container, and Microsoft went a step further and published SQL Server images on Docker Hub. If you want to move such workloads, it is more than possible!
- What about stateful services? Should we expect a universal mechanism?

Craig Box: The MVP of Kubernetes hosted stateless web services, but we always thought about what we would need in order to run stateful workloads.
We developed the StatefulSet concept, which gives each member an ordinal (numbered) identity and its own storage. In addition, you can create "operators" for applications that know what has to happen in order to add a member to a cluster or remove one.
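A sketch of what the ordinal identity and per-member storage look like in a StatefulSet manifest; all names and the image are hypothetical, and the `apiVersion` may differ on older clusters:

```yaml
# Each pod gets a stable name (db-0, db-1, db-2) and its own volume claim,
# so a restarted member comes back with the same identity and data.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db          # headless Service giving each pod a stable DNS name
  replicas: 3              # pods are created in order: db-0, db-1, db-2
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: example.com/db:1.0
        volumeMounts:
        - name: data
          mountPath: /var/lib/db
  volumeClaimTemplates:    # a separate PersistentVolumeClaim per member
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```

An operator builds on exactly this foundation: it watches these objects and runs the application-specific steps (joining a ring, rebalancing data) when members are added or removed.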
The Kubernetes community has developed a number of templates for popular open-source applications such as Cassandra and MongoDB, and, as I said, many companies are moving traditional enterprise applications into containers. We hope that over time every community will treat Kubernetes as a first-class, supported deployment platform.
If you are concerned with microservice management, we invite you to attend Craig Box's talk Managing your microservices with Kubernetes and Istio at DevOops 2017.
You will probably also be interested in other talks at the conference, including: