
A week ago, the developers of Rancher presented a preliminary release of their next major version, 2.0, simultaneously announcing a transition to Kubernetes as the single basis for container orchestration. What prompted the developers to take this path?
Background: Cattle
Rancher 1.0 was released in March of last year, and today, according to Rancher Labs, there are more than 10,000 active installations and more than 100 commercial users worldwide. The first version of Rancher included Cattle, which, being a "high-level component written in Java," was called the "basic orchestration engine" and "the main loop of the entire system." In essence, Cattle was not even a framework for orchestrating containers, but a special layer that managed metadata (plus resources, relationships, states, etc.) and delegated all real work to external systems.
Cattle supported many solutions available on the market: Docker Swarm, Apache Mesos, and Kubernetes, because, as the authors themselves write, "Rancher users liked the idea of implementing a management platform that gives them the freedom to choose a framework to orchestrate containers."
However, the growing popularity of Kubernetes led Rancher users to place more and more requirements on the capabilities and convenience of working with this system. At the same time, the view of Rancher Labs management on Kubernetes' current problems and prospects also changed.
Kubernetes: yesterday, today, tomorrow
In 2015, when Rancher Labs started working with Kubernetes, i.e., adding its support to their platform, the main problem in K8s, according to the developers, was the installation and initial configuration of the system. Hence the pride in the announcement of initial Kubernetes support in the Rancher v0.63 release (March 17, 2016): "Now you can start a Kubernetes environment in one click and get access to a fully deployed cluster in 5-10 minutes."
Time passed, and the next major obstacle to the use of Kubernetes, in the opinion of the head of Rancher Labs, was the ongoing operation and updating of clusters. By the end of last year, this problem had ceased to be serious, thanks to the active development of utilities such as kops and the emerging trend of offering Kubernetes as a service (Kubernetes-as-a-Service). This confirmed the thoughts of Joe Beda, founder of the Kubernetes project at Google and now CEO of Heptio, that Kubernetes will be the next platform for launching modern applications, or rather that "Kubernetes will become a key part of this platform, but the story definitely won't end there."
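For a sense of how far cluster bootstrapping had come, here is a minimal kops sketch, assuming an AWS account with a configured DNS zone and an S3 bucket for state storage (the cluster and bucket names are purely illustrative):

```shell
# Generate the cluster configuration in the kops state store
kops create cluster \
  --name=k8s.example.com \
  --state=s3://example-kops-state \
  --zones=us-east-1a

# Apply the configuration and actually provision the AWS resources
kops update cluster k8s.example.com \
  --state=s3://example-kops-state \
  --yes
```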
More recent events, such as the support of Kubernetes in DC/OS from Mesosphere and the official connection of IT giants like Microsoft and Oracle to the Kubernetes ecosystem, can serve as further and more objective confirmation. The latest addition to the CNCF alone prompted the online publication ARCHITECHT to state that "now, with Oracle on board, Kubernetes should become the de facto standard for container orchestration."
At Rancher Labs, they are betting that the next step on Kubernetes' path will be its transformation into a "universal standard for infrastructure," which will happen when the aforementioned Kubernetes-as-a-Service becomes a typical offering of most infrastructure providers:
"The DevOps team will no longer need to manage Kubernetes clusters independently. The only difficulty that remains is managing and using Kubernetes clusters accessible from everywhere."
With these thoughts in mind, the company's engineers set about a major update of Rancher: version 2.0.
Rancher 2.0
As in Rancher 1.0, in version 2.0 the platform consists of a server (which manages the entire installation) and agents (installed on all connected hosts). The main difference between Rancher 2.0 and 1.0 is that Kubernetes is now built into the server. This means that when you start the rancher/server Docker image, a Kubernetes cluster starts, and each new host you add becomes part of it (running a kubelet). In addition to this master cluster, you can create further clusters, as well as import existing ones built with kops or hosted by external providers like Google (GKE). The Rancher agent runs on all embedded and imported clusters.
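In practice, starting the server side comes down to a single docker run. A minimal sketch; the :preview image tag is an assumption based on how the tech preview was distributed, so check the release notes for the exact tag:

```shell
# Start the Rancher 2.0 server (with the embedded Kubernetes master);
# the web UI then becomes available on port 8080 of this host
docker run -d --restart=unless-stopped -p 8080:8080 rancher/server:preview
```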
On top of all these Kubernetes clusters, Rancher implements common layers for centralized management (authentication and RBAC, provisioning, updates, monitoring, backups) and for interacting with them (web user interface, API, CLI):

In more detail (down to the host level), the Rancher 2.0 architecture looks like this:

Among other features of this solution, I will highlight the following:
- The Netes-agent shown in the diagram is the component responsible for translating Rancher container definitions into Kubernetes. This agent connects to all Kubernetes clusters, making them manageable.
- The Kubernetes master built into Rancher 2.0 is Rancher's own distribution, including the API Server, Scheduler, and Controller Manager. These three components are combined into a single process running in a single container. By default, the database backends of all clusters are placed in a single database ("this greatly simplifies the management of embedded clusters in Rancher").
- Only one component in Rancher is still written in Java (all the others are written in Go): the Core controller. It implements Rancher services on top of Kubernetes: services, load balancers, DNS records. Along with the Websocket proxy and the Compose executor, it is part of the Rancher Controller launched on the Rancher server (see the diagram above).
- Volumes in Rancher are PersistentVolumeClaims (PVCs). There are three types of volumes that differ in their life cycle: Container scoped (created when a container is created, deleted when the container is removed), Service scoped (similarly for services), and Environment scoped (existing as long as the environment exists). A minimal PVC sketch is shown after this list.
- For networking, Rancher's pre-existing IPsec and VXLAN overlay solutions are supported, as well as third-party CNI plugins for embedded clusters. Two new network modes are also offered for launching containers in an isolated space: "layer 3 routed" (a subnet per host) and "layer 2 flat".
- Rancher 2.0 is positioned as a solution that works with "standard SQL DBMSs, such as MySQL." For this you can either use ready-made Database-as-a-Service offerings (for example, RDS from AWS) or set up a database yourself using Galera Cluster for MySQL or MySQL NDB Cluster.
- Planned scalability limits: 1,000 clusters, 1,000 hosts per cluster (and 10,000 hosts across all clusters), 30,000 containers per cluster (and 300,000 containers across all clusters).
- Currently supported Docker versions are 1.12.6, 1.13.1, 17.03-ce, and 17.06-ce (which is close to the compatibility list of the recently released Kubernetes 1.8).
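As promised in the volumes item above, here is the underlying Kubernetes primitive that Rancher volumes map onto. A minimal PersistentVolumeClaim sketch: this is the plain Kubernetes resource, not Rancher's own definition format, and the claim name and size are illustrative:

```shell
# Request a 1 GiB volume via the standard Kubernetes PVC API
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim
spec:
  accessModes:
    - ReadWriteOnce   # mountable read-write by a single node
  resources:
    requests:
      storage: 1Gi
EOF
```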
Details about the technical design of Rancher 2.0 are available in this PDF document.
In addition, the developers have of course taken care of improvements in the interface for the end user, making the Rancher UI (and the application catalog) even simpler while leaving advanced users access to kubectl and the Kubernetes dashboard. A 15-minute Rancher 2.0 presentation has been posted on YouTube, demonstrating both the internal and the external changes.
The current version of Rancher 2.0, positioned as a tech preview and released a week ago, is v2.0.0-alpha6. The final release is scheduled for Q4 2017.
CoreOS and fleet
I will finish the story about Rancher 2.0 with a quite different example, from CoreOS. The point is that a similar story, when viewed in the context of the ubiquitous adoption of Kubernetes, happened with their fleet, a fairly well-known open-source product in the DevOps world. Characterized as a "distributed init system," fleet tied together systemd and etcd to make systemd usable at cluster scale (instead of on a single machine). It was created to become "the basis for a higher-level orchestration."
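To give a feel for the tool: a fleet unit was an ordinary systemd unit plus an optional [X-Fleet] section with cluster-level scheduling hints. A brief historical sketch; the unit name and command are illustrative:

```shell
# hello.service: standard systemd syntax with fleet scheduling metadata
cat > hello.service <<'EOF'
[Unit]
Description=Hello World container

[Service]
ExecStart=/usr/bin/docker run --rm busybox /bin/sh -c 'while true; do echo Hello; sleep 1; done'

[X-Fleet]
# Do not co-locate with other instances of the same template
Conflicts=hello@*.service
EOF

# Submit the unit to the cluster; fleet (via etcd) picks a machine
# and hands the unit to that machine's systemd
fleetctl start hello.service
```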
At the end of last year, the fleet project changed its status from "tested in production" to "no longer developed or maintained," and at the beginning of this year it was clarified that the fleet image would cease to be part of Container Linux as of February 1, 2018. Why? Instead of fleet, CoreOS recommends installing Kubernetes for all cluster needs.
A note on the CoreOS blog (dated February 2017) on this topic clarifies that Kubernetes "is becoming the de facto standard for open-source container orchestration." The authors also claim: "For a number of technical and market reasons, Kubernetes is the best tool for managing and automating container infrastructure at large scale."