
Practice with dapp. Part 2. Deploying Docker images in Kubernetes using Helm

dapp is our open-source utility that helps DevOps engineers with CI/CD processes (read more about it in the announcement). The Russian-language documentation walks through building a simple application, and the first part of this article covered that process in more detail, demonstrating dapp's main features. Now, based on the same simple application, I'll show how dapp works with a Kubernetes cluster.



As in the first article, all additions to the symfony-demo application code are in our repository. The Vagrantfile will not help this time, though: Docker and dapp have to be installed locally.
To follow along, start from the dapp_build branch, where the Dappfile was added in the first article.

 $ git clone https://github.com/flant/symfony-demo.git
 $ cd symfony-demo
 $ git checkout dapp_build
 $ git checkout -b kube_test
 $ dapp dimg build

Starting a cluster using Minikube


Now we need a Kubernetes cluster where dapp will launch the application. For this we will use Minikube, the recommended way to run a cluster on a local machine.

Installation is simple and consists of downloading Minikube and the kubectl utility. Instructions are available at the links:


Note : Read also our translation of the article “ Getting started in Kubernetes using Minikube ”.

After installation, you need to run minikube start . Minikube will download an ISO and launch a virtual machine from it in VirtualBox.

After a successful start, you can see what is in the cluster:

 $ kubectl get all
 NAME                                READY     STATUS    RESTARTS   AGE
 po/hello-minikube-938614450-zx7m6   1/1       Running   3          71d

 NAME                 CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
 svc/hello-minikube   10.0.0.102   <nodes>       8080:31429/TCP   71d
 svc/kubernetes       10.0.0.1     <none>        443/TCP          71d

 NAME                    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
 deploy/hello-minikube   1         1         1            1           71d

 NAME                          DESIRED   CURRENT   READY     AGE
 rs/hello-minikube-938614450   1         1         1         71d

The command shows all resources in the default namespace ( default ). A list of all namespaces can be viewed with kubectl get ns .

Preparation, step 1: a Registry for images


So, we have a Kubernetes cluster running in a virtual machine. What else is needed to run the application?

First, the image must be pushed somewhere the cluster can pull it from. You can use a public Docker Registry or run your own Registry inside the cluster (we do the latter for production clusters). For local development the second option is also better, and implementing it with dapp is quite simple, since there is a special command for it:

 $ dapp kube minikube setup
 Restart minikube                [RUNNING]
 minikube: Running
 localkube: Running
 kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.100
 Starting local Kubernetes v1.6.4 cluster...
 Starting VM...
 Moving files into cluster...
 Setting up certs...
 Starting cluster components...
 Connecting to cluster...
 Setting up kubeconfig...
 Kubectl is now configured to use the cluster.
 Restart minikube                [OK] 34.18 sec
 Wait till minikube ready        [RUNNING]
 Wait till minikube ready        [OK] 0.05 sec
 Run registry                    [RUNNING]
 Run registry                    [OK] 61.44 sec
 Run registry forwarder daemon   [RUNNING]
 Run registry forwarder daemon   [OK] 5.01 sec

After it finishes, the following port forwarding appears in the list of system processes:

 username 13317 0.5 0.4 57184 36076 pts/17 Sl 14:03 0:00 kubectl port-forward --namespace kube-system kube-registry-6nw7m 5000:5000 

... and in the kube-system namespace, a Registry and a proxy to it are created:

 $ kubectl get -n kube-system all
 NAME                             READY     STATUS    RESTARTS   AGE
 po/kube-addon-manager-minikube   1/1       Running   2          22m
 po/kube-dns-1301475494-7kk6l     3/3       Running   3          22m
 po/kube-dns-v20-g7hr9            3/3       Running   9          71d
 po/kube-registry-6nw7m           1/1       Running   0          3m
 po/kube-registry-proxy           1/1       Running   0          3m
 po/kubernetes-dashboard-9zsv8    1/1       Running   3          71d
 po/kubernetes-dashboard-f4tp1    1/1       Running   1          22m

 NAME                       DESIRED   CURRENT   READY     AGE
 rc/kube-dns-v20            1         1         1         71d
 rc/kube-registry           1         1         1         3m
 rc/kubernetes-dashboard    1         1         1         71d

 NAME                       CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
 svc/kube-dns               10.0.0.10    <none>        53/UDP,53/TCP   71d
 svc/kube-registry          10.0.0.142   <none>        5000/TCP        3m
 svc/kubernetes-dashboard   10.0.0.249   <nodes>       80:30000/TCP    71d

 NAME              DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
 deploy/kube-dns   1         1         1            1           22m

 NAME                      DESIRED   CURRENT   READY     AGE
 rs/kube-dns-1301475494    1         1         1         22m

Let's test the launched Registry by pushing our image into it with the command dapp dimg push --tag-branch :minikube . The :minikube used here is an alias built into dapp specifically for Minikube; it is expanded to localhost:5000/symfony-demo .

 $ dapp dimg push --tag-branch :minikube
 symfony-demo-app
   localhost:5000/symfony-demo:symfony-demo-app-kube_test            [PUSHING]
     pushing image `localhost:5000/symfony-demo:symfony-demo-app-kube_test`   [RUNNING]
 The push refers to a repository [localhost:5000/symfony-demo]
 0ea2a2940c53: Preparing
 ffe608c425e1: Preparing
 5c2cc2aa6663: Preparing
 edbfc49bce31: Preparing
 308e5999b491: Preparing
 9688e9ffce23: Preparing
 0566c118947e: Preparing
 6f9cf951edf5: Preparing
 182d2a55830d: Preparing
 5a4c2c9a24fc: Preparing
 cb11ba605400: Preparing
 6f9cf951edf5: Waiting
 182d2a55830d: Waiting
 5a4c2c9a24fc: Waiting
 cb11ba605400: Waiting
 9688e9ffce23: Waiting
 0566c118947e: Waiting
 0ea2a2940c53: Layer already exists
 308e5999b491: Layer already exists
 ffe608c425e1: Layer already exists
 edbfc49bce31: Layer already exists
 5c2cc2aa6663: Layer already exists
 0566c118947e: Layer already exists
 9688e9ffce23: Layer already exists
 182d2a55830d: Layer already exists
 6f9cf951edf5: Layer already exists
 cb11ba605400: Layer already exists
 5a4c2c9a24fc: Layer already exists
 symfony-demo-app-kube_test: digest: sha256:5c55386de5f40895e0d8292b041d4dbb09373b78d398695a1f3e9bf23ee7e123 size: 2616
     pushing image `localhost:5000/symfony-demo:symfony-demo-app-kube_test`   [OK] 0.54 sec

You can see that the image tag in the Registry is composed of the dimg name and the branch name, joined with a hyphen.
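As a rough sketch of that naming scheme (the variable names here are mine for illustration, not something dapp exposes):

```shell
# Sketch: how the full image reference is assembled.
# The :minikube alias expands to the local in-cluster Registry address.
REGISTRY="localhost:5000/symfony-demo"
DIMG_NAME="symfony-demo-app"
BRANCH="kube_test"

# Tag = <dimg name>-<branch>, joined with a hyphen.
TAG="${DIMG_NAME}-${BRANCH}"
echo "${REGISTRY}:${TAG}"
# Matches the reference seen in the push output above.
```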

Preparation, step 2: resource configuration (Helm)


The second part required to run the application in a cluster is the resource configuration. The standard Kubernetes cluster management utility is kubectl . To create a new resource ( Deployment , Service , Ingress , etc.) or change the properties of an existing one, you pass a YAML file with the configuration to the utility.

However, dapp does not use kubectl directly; it works with the package manager Helm , which provides templating for the YAML files and manages the rollout to the cluster itself.

Therefore, our next step is to install Helm. Official instructions can be found in the project documentation .

After installation, you must run helm init . What does it do? Helm consists of the client part that we installed and a server part. The helm init command installs the server part ( tiller ). Let's see what appeared in the kube-system namespace:

 $ kubectl get -n kube-system all
 NAME                                READY     STATUS    RESTARTS   AGE
 po/kube-addon-manager-minikube      1/1       Running   2          1h
 po/kube-dns-1301475494-7kk6l        3/3       Running   3          1h
 po/kube-dns-v20-g7hr9               3/3       Running   9          71d
 po/kube-registry-6nw7m              1/1       Running   0          1h
 po/kube-registry-proxy              1/1       Running   0          1h
 po/kubernetes-dashboard-9zsv8       1/1       Running   3          71d
 po/kubernetes-dashboard-f4tp1       1/1       Running   1          1h
 !!! po/tiller-deploy-3703072393-bdqn8   1/1       Running   0          3m

 NAME                       DESIRED   CURRENT   READY     AGE
 rc/kube-dns-v20            1         1         1         71d
 rc/kube-registry           1         1         1         1h
 rc/kubernetes-dashboard    1         1         1         71d

 NAME                       CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
 svc/kube-dns               10.0.0.10    <none>        53/UDP,53/TCP   71d
 svc/kube-registry          10.0.0.142   <none>        5000/TCP        1h
 svc/kubernetes-dashboard   10.0.0.249   <nodes>       80:30000/TCP    71d
 !!! svc/tiller-deploy          10.0.0.196   <none>        44134/TCP       3m

 NAME                   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
 deploy/kube-dns        1         1         1            1           1h
 !!! deploy/tiller-deploy   1         1         1            1           3m

 NAME                          DESIRED   CURRENT   READY     AGE
 rs/kube-dns-1301475494        1         1         1         1h
 !!! rs/tiller-deploy-3703072393   1         1         1         3m

(Here and below, "!!!" manually highlights the lines worth paying attention to.)

That is, a Deployment named tiller-deploy appeared, with one ReplicaSet and one Pod . A Service of the same name ( tiller-deploy ) was created for the Deployment, providing access via port 44134.

Preparation, step 3: IngressController


The third part is the configuration of the application itself. At this stage, you need to understand what has to be put into the cluster for the application to work.

The following scheme is proposed:


IngressController is an additional Kubernetes cluster component for load balancing web applications. In essence, it is nginx whose configuration depends on the Ingress resources added to the cluster. The component must be installed separately, and Minikube provides an addon for it. You can read more about it in this article in English, but for now let's just install the IngressController:

 $ minikube addons enable ingress ingress was successfully enabled 

... and see what appeared in the cluster:

 $ kubectl get -n kube-system all
 NAME                                READY     STATUS    RESTARTS   AGE
 !!! po/default-http-backend-vbrf3       1/1       Running   0          2m
 po/kube-addon-manager-minikube      1/1       Running   2          3h
 po/kube-dns-1301475494-7kk6l        3/3       Running   3          3h
 po/kube-dns-v20-g7hr9               3/3       Running   9          72d
 po/kube-registry-6nw7m              1/1       Running   0          3h
 po/kube-registry-proxy              1/1       Running   0          3h
 po/kubernetes-dashboard-9zsv8       1/1       Running   3          72d
 po/kubernetes-dashboard-f4tp1       1/1       Running   1          3h
 !!! po/nginx-ingress-controller-hmvg9   1/1       Running   0          2m
 po/tiller-deploy-3703072393-bdqn8   1/1       Running   0          1h

 NAME                          DESIRED   CURRENT   READY     AGE
 !!! rc/default-http-backend       1         1         1         2m
 rc/kube-dns-v20               1         1         1         72d
 rc/kube-registry              1         1         1         3h
 rc/kubernetes-dashboard       1         1         1         72d
 !!! rc/nginx-ingress-controller   1         1         1         2m

 NAME                       CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
 !!! svc/default-http-backend   10.0.0.131   <nodes>       80:30001/TCP    2m
 svc/kube-dns               10.0.0.10    <none>        53/UDP,53/TCP   72d
 svc/kube-registry          10.0.0.142   <none>        5000/TCP        3h
 svc/kubernetes-dashboard   10.0.0.249   <nodes>       80:30000/TCP    72d
 svc/tiller-deploy          10.0.0.196   <none>        44134/TCP       1h

 NAME                   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
 deploy/kube-dns        1         1         1            1           3h
 deploy/tiller-deploy   1         1         1            1           1h

 NAME                          DESIRED   CURRENT   READY     AGE
 rs/kube-dns-1301475494        1         1         1         3h
 rs/tiller-deploy-3703072393   1         1         1         1h

How do we check it? The IngressController has a default-http-backend that responds with a 404 error to all pages without a handler. This can be seen with the following command:

 $ curl -i $(minikube ip) HTTP/1.1 404 Not Found Server: nginx/1.13.1 Date: Fri, 14 Jul 2017 14:29:46 GMT Content-Type: text/plain; charset=utf-8 Content-Length: 21 Connection: keep-alive Strict-Transport-Security: max-age=15724800; includeSubDomains; default backend - 404 

The result is positive: the answer comes from nginx with the line default backend - 404 .

Configuration Description for Helm


Now we can describe the application configuration. A basic configuration can be generated with the helm create command:

 $ helm create symfony-demo
 $ tree symfony-demo
 symfony-demo/
 ├── charts
 ├── Chart.yaml
 ├── templates
 │   ├── deployment.yaml
 │   ├── _helpers.tpl
 │   ├── ingress.yaml
 │   ├── NOTES.txt
 │   └── service.yaml
 └── values.yaml

dapp expects this structure in a directory called .helm ( see the documentation ), so you need to rename symfony-demo to .helm .
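The rename itself is a one-liner; as a sketch that runs even without helm installed (the generated directory is simulated here with mkdir):

```shell
# Simulate the skeleton that `helm create symfony-demo` would generate,
# then move it to the .helm directory that dapp expects.
mkdir -p symfony-demo/templates
mv symfony-demo .helm
ls -d .helm/templates
```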

We have now created a chart description. A chart is Helm's unit of configuration; you can think of it as a kind of package. For example, there are charts for nginx, MySQL, Redis, and from such charts you can assemble the desired configuration in a cluster. Helm deploys to Kubernetes not individual images but charts ( official documentation ).

The Chart.yaml file is a description of the chart of our application. Here you need to specify at least the application name and version:

 $ cat Chart.yaml
 apiVersion: v1
 description: A Helm chart for Kubernetes
 name: symfony-demo
 version: 0.1.0

The values.yaml file describes variables that will be available in the templates. For example, the generated file contains image: repository: nginx . This variable is accessible via the construction {{ .Values.image.repository }} .
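For illustration, a minimal values.yaml along those lines can be written and inspected like this (the keys are from Helm's generated example; our chart in this article does not actually require them):

```shell
# Write a minimal values.yaml like the one `helm create` generates.
cat > values.yaml <<'EOF'
image:
  repository: nginx
  tag: stable
replicaCount: 1
EOF

# A template referencing {{ .Values.image.repository }} would resolve to:
grep 'repository:' values.yaml
```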

The charts directory is currently empty, because our application chart does not use external charts yet.

Finally, the templates directory stores the YAML file templates describing the resources to be placed in the cluster. The generated templates won't be needed here, so after looking them over you can delete them.

First, let's describe a simple Deployment option for our application:

 apiVersion: extensions/v1beta1
 kind: Deployment
 metadata:
   name: {{ .Chart.Name }}-backend
 spec:
   replicas: 1
   template:
     metadata:
       labels:
         app: {{ .Chart.Name }}-backend
     spec:
       containers:
       - command: [ '/opt/start.sh' ]
         image: {{ tuple "symfony-demo-app" . | include "dimg" }}
         imagePullPolicy: Always
         name: {{ .Chart.Name }}-backend
         ports:
         - containerPort: 8000
           name: http
           protocol: TCP
         env:
         - name: KUBERNETES_DEPLOYED
           value: "{{ now }}"

The configuration says that for now we need one replica, and the template specifies which Pods should be replicated. The description indicates the image to launch and the ports available to other containers in the Pod.

The .Chart.Name mentioned in the config is the value from Chart.yaml .

The KUBERNETES_DEPLOYED variable is needed so that Helm updates the Pods even if we update the image without changing its tag. This is convenient for debugging and local development.
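The mechanism behind this trick can be sketched in plain shell: {{ now }} renders to a new timestamp on every deploy, so the Pod spec differs between renders and Kubernetes treats it as a change worth rolling out (the variable names below are illustrative, not part of Helm):

```shell
# Two consecutive "renders" of a spec containing a current-time value.
# Nanosecond-resolution timestamps (GNU date) differ between calls.
RENDER_1="KUBERNETES_DEPLOYED=$(date +%s%N)"
RENDER_2="KUBERNETES_DEPLOYED=$(date +%s%N)"

if [ "$RENDER_1" != "$RENDER_2" ]; then
  # The spec hash changes, so the Deployment restarts its Pods
  # even when the image tag is identical.
  echo "spec changed: Pods will be recreated"
fi
```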

Next, we describe the Service :

 apiVersion: v1
 kind: Service
 metadata:
   name: {{ .Chart.Name }}-srv
 spec:
   type: ClusterIP
   selector:
     app: {{ .Chart.Name }}-backend
   ports:
   - name: http
     port: 8000
     protocol: TCP

With this resource, we create a symfony-demo-app-srv DNS record by which other Deployments can access the application.
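Inside the cluster, that record follows the standard Kubernetes service DNS scheme; a small sketch of the address another Deployment would use (release names aside, the short form symfony-demo-app-srv also works within the same namespace):

```shell
# Kubernetes service DNS: <service>.<namespace>.svc.cluster.local
SERVICE="symfony-demo-app-srv"
NAMESPACE="default"
URL="http://${SERVICE}.${NAMESPACE}.svc.cluster.local:8000"
echo "$URL"
```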

These two descriptions, joined with --- , go into .helm/templates/backend.yaml , after which the application can be deployed!
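As a sketch of the resulting file layout (the manifest bodies are stubbed out with comments here; the real file contains the Deployment and Service shown above):

```shell
# A single template file may hold several YAML documents
# separated by a line containing only "---".
mkdir -p .helm/templates
cat > .helm/templates/backend.yaml <<'EOF'
# Deployment manifest goes here
---
# Service manifest goes here
EOF

# One separator line means the file holds two documents.
grep -c '^---' .helm/templates/backend.yaml
```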

First deploy


Now everything is ready to run dapp kube deploy (for more information about the command, see the documentation ):

 $ dapp kube deploy :minikube --image-version kube_test
 Deploy release symfony-demo-default [RUNNING]
 Release "symfony-demo-default" has been upgraded. Happy Helming!
 LAST DEPLOYED: Fri Jul 14 18:32:38 2017
 NAMESPACE: default
 STATUS: DEPLOYED

 RESOURCES:
 ==> v1beta1/Deployment
 NAME                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
 symfony-demo-app-backend   1         1         1            0           7s

 ==> v1/Service
 NAME                   CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
 symfony-demo-app-srv   10.0.0.173   <none>        8000/TCP   7s

 Deploy release symfony-demo-default [OK] 7.02 sec

We see that a Pod appears in the cluster in the ContainerCreating state:

 po/symfony-demo-app-backend-3899272958-hzk4l 0/1 ContainerCreating 0 24s 

... and after a while everything works:

 $ kubectl get all
 NAME                                           READY     STATUS    RESTARTS   AGE
 po/hello-minikube-938614450-zx7m6              1/1       Running   3          72d
 !!! po/symfony-demo-app-backend-3899272958-hzk4l   1/1       Running   0          47s

 NAME                       CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
 svc/hello-minikube         10.0.0.102   <nodes>       8080:31429/TCP   72d
 svc/kubernetes             10.0.0.1     <none>        443/TCP          72d
 !!! svc/symfony-demo-app-srv   10.0.0.173   <none>        8000/TCP         47s

 NAME                              DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
 deploy/hello-minikube             1         1         1            1           72d
 deploy/symfony-demo-app-backend   1         1         1            1           47s

 NAME                                     DESIRED   CURRENT   READY     AGE
 rs/hello-minikube-938614450              1         1         1         72d
 !!! rs/symfony-demo-app-backend-3899272958   1         1         1         47s

A ReplicaSet , a Pod , and a Service have been created; in other words, the application is running. This can be checked "the old-fashioned way" by entering the container:

 $ kubectl exec -ti symfony-demo-app-backend-3899272958-hzk4l bash
 root@symfony-demo-app-backend-3899272958-hzk4l:/# curl localhost:8000

Open access


Now, to make the application available at $(minikube ip) , let's add an Ingress resource. To do this, we describe it in .helm/templates/backend-ingress.yaml as follows:

 apiVersion: extensions/v1beta1
 kind: Ingress
 metadata:
   name: {{ .Chart.Name }}
   annotations:
     kubernetes.io/ingress.class: "nginx"
 spec:
   rules:
   - http:
       paths:
       - path: /
         backend:
           serviceName: {{ .Chart.Name }}-srv
           servicePort: 8000

The serviceName must match the name of the Service declared in backend.yaml . Deploy the application again:

 $ dapp kube deploy :minikube --image-version kube_test
 Deploy release symfony-demo-default [RUNNING]
 Release "symfony-demo-default" has been upgraded. Happy Helming!
 LAST DEPLOYED: Fri Jul 14 19:00:28 2017
 NAMESPACE: default
 STATUS: DEPLOYED

 RESOURCES:
 ==> v1/Service
 NAME                   CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
 symfony-demo-app-srv   10.0.0.173   <none>        8000/TCP   27m

 ==> v1beta1/Deployment
 NAME                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
 symfony-demo-app-backend   1         1         1            1           27m

 ==> v1beta1/Ingress
 NAME               HOSTS     ADDRESS          PORTS     AGE
 symfony-demo-app   *         192.168.99.100   80        2s

 Deploy release symfony-demo-default [OK] 3.06 sec

A v1beta1/Ingress appeared! Let's try to access the application through the IngressController . This can be done via the cluster IP:

 $ curl -Lik $(minikube ip)
 HTTP/1.1 301 Moved Permanently
 Server: nginx/1.13.1
 Date: Fri, 14 Jul 2017 16:13:45 GMT
 Content-Type: text/html
 Content-Length: 185
 Connection: keep-alive
 Location: https://192.168.99.100/
 Strict-Transport-Security: max-age=15724800; includeSubDomains;

 HTTP/1.1 403 Forbidden
 Server: nginx/1.13.1
 Date: Fri, 14 Jul 2017 16:13:45 GMT
 Content-Type: text/html; charset=UTF-8
 Transfer-Encoding: chunked
 Connection: keep-alive
 Host: 192.168.99.100
 X-Powered-By: PHP/7.0.18-0ubuntu0.16.04.1
 Strict-Transport-Security: max-age=15724800; includeSubDomains;

 You are not allowed to access this file. Check app_dev.php for more information.

Overall, we can consider the deployment of the application to Minikube a success. The request shows that the IngressController redirects to HTTPS (port 443), and the application responds that you need to check app_dev.php . That is a peculiarity of the chosen application (symfony): in web/app_dev.php it is easy to notice:

 // This check prevents access to debug front controllers that are deployed by
 // accident to production servers. Feel free to remove this, extend it, or make
 // something more sophisticated.
 if (isset($_SERVER['HTTP_CLIENT_IP'])
     || isset($_SERVER['HTTP_X_FORWARDED_FOR'])
     || !(in_array(@$_SERVER['REMOTE_ADDR'], ['127.0.0.1', 'fe80::1', '::1']) || php_sapi_name() === 'cli-server')
 ) {
     header('HTTP/1.0 403 Forbidden');
     exit('You are not allowed to access this file. Check '.basename(__FILE__).' for more information.');
 }

To see the application's normal page, you need to deploy it with a different setting or comment out this block for testing. A repeated deployment to Kubernetes (after edits to the application code) looks like this:
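That build-push-deploy loop lends itself to a small helper; this is only a convenience sketch (it assumes the commands used throughout this article and a running Minikube; the function is merely defined here, not executed):

```shell
# Sketch: wrap the edit / rebuild / redeploy cycle in one function.
# Each step runs only if the previous one succeeded.
redeploy() {
  dapp dimg build &&
  dapp dimg push --tag-branch :minikube &&
  dapp kube deploy :minikube --image-version kube_test
}

# Defining the function has no side effects; call `redeploy` after each edit.
type redeploy >/dev/null 2>&1 && echo "helper defined"
```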

 $ dapp dimg build
 ...
 Git artifacts: latest patch ...                              [OK] 1.86 sec
     signature: dimgstage-symfony-demo:13a2487a078364c07999d1820d4496763c2143343fb94e0d608ce1a527254dd3
 Docker instructions ...                                      [OK] 1.46 sec
     signature: dimgstage-symfony-demo:e0226872a5d324e7b695855b427e8b34a2ab6340ded1e06b907b165589a45c3b
     instructions: EXPOSE 8000
 $ dapp dimg push --tag-branch :minikube
 ...
 symfony-demo-app-kube_test: digest: sha256:eff826014809d5aed8a82a2c5cfb786a13192ae3c8f565b19bcd08c399e15fc2 size: 2824
     pushing image `localhost:5000/symfony-demo:symfony-demo-app-kube_test`   [OK] 1.16 sec
   localhost:5000/symfony-demo:symfony-demo-app-kube_test                     [OK] 1.41 sec
 $ dapp kube deploy :minikube --image-version kube_test
 $ kubectl get all
 !!! po/symfony-demo-app-backend-3438105059-tgfsq   1/1       Running   0          1m

The Pod has been re-created, and you can open the browser and see a beautiful picture:



Summary


With Minikube and Helm you can test your applications in a Kubernetes cluster, and dapp will help with building the images, deploying your own Registry, and deploying the application itself.

The article does not mention secret variables that can be used in templates for private keys, passwords and other sensitive information. We will write about this separately.

PS


Read also in our blog:

Source: https://habr.com/ru/post/336170/

