
I need to stand up a Kubernetes cluster, but I'm just an ordinary programmer. There is a way out.



Good day. Another note from my experience. This time it's a surface-level look at the basic infrastructure I use when I need to deploy something and there are no DevOps engineers around. The current level of abstraction in the tooling has allowed me, for about a year now, to live with this infrastructure, stood up overnight using the Internet and ready-made things.

Keywords: AWS + Terraform + kops. If it's useful to me, it may be useful to someone else. Welcome to the comments.

-1. What we're dealing with


The classic situation: the project has been written up to the point where it needs to be deployed somewhere and put to use, and the project is more complicated than a plain html page. I would like horizontal scaling, identical environments on the local, test, and prod stands, and a more or less sane deployment process.
This will be about an app on Laravel, to show the whole process from beginning to end. But in the same way you can deploy a scattering of go services, python applications, small WP sites, html pages, and much more. This is enough up to a certain level, and then a dedicated person appears on the team who will improve and extend it.
Last time I came to the conclusion that on a local machine I only need GoLand, PhpStorm, Docker, and Git installed to be completely ready for work. And since clusters can be managed from a single machine, the whole process will be described without regard to the OS you work on, with everything packed into a docker container.

0. Getting ready for work.


Let's imagine that we have already registered an AWS account, asked technical support to raise the account limit on the number of simultaneously running servers, created an IAM user, and now have an Access Key + Secret Key. The region is us-east-1.

What do we need on the local machine? The AWS CLI, Terraform for declarative management of AWS, kubectl, kops for setting up the cluster, and Helm for deploying some services. We build a Dockerfile (which I once found somewhere on GitHub but can no longer find where). We write our docker-compose.yml for mounting directories and a Makefile for aliases.

Dockerfile
FROM ubuntu:16.04

ARG AWSCLI_VERSION=1.12.1
ARG HELM_VERSION=2.8.2
ARG ISTIO_VERSION=0.6.0
ARG KOPS_VERSION=1.9.0
ARG KUBECTL_VERSION=1.10.1
ARG TERRAFORM_VERSION=0.11.0

# Install generally useful things
RUN apt-get update \
 && apt-get -y --force-yes install --no-install-recommends \
    curl \
    dnsutils \
    git \
    jq \
    net-tools \
    ssh \
    telnet \
    unzip \
    vim \
    wget \
 && apt-get clean \
 && apt-get autoclean \
 && apt-get autoremove \
 && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

# Install AWS CLI
RUN apt-get update \
 && apt-get -y --force-yes install \
    python-pip \
 && pip install awscli==${AWSCLI_VERSION} \
 && apt-get clean \
 && apt-get autoclean \
 && apt-get autoremove \
 && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

# Install Terraform
RUN wget -O terraform.zip https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip \
 && unzip terraform.zip \
 && mv terraform /usr/local/bin/terraform \
 && chmod +x /usr/local/bin/terraform \
 && rm terraform.zip

# Install kubectl
ADD https://storage.googleapis.com/kubernetes-release/release/v${KUBECTL_VERSION}/bin/linux/amd64/kubectl /usr/local/bin/kubectl
RUN chmod +x /usr/local/bin/kubectl

# Install kops
ADD https://github.com/kubernetes/kops/releases/download/${KOPS_VERSION}/kops-linux-amd64 /usr/local/bin/kops
RUN chmod +x /usr/local/bin/kops

# Install Helm
RUN wget -O helm.tar.gz https://storage.googleapis.com/kubernetes-helm/helm-v${HELM_VERSION}-linux-amd64.tar.gz \
 && tar xfz helm.tar.gz \
 && mv linux-amd64/helm /usr/local/bin/helm \
 && chmod +x /usr/local/bin/helm \
 && rm -Rf linux-amd64 \
 && rm helm.tar.gz

# Create default user "kops"
RUN useradd -ms /bin/bash kops
WORKDIR /home/kops
USER kops

# Ensure the prompt doesn't break if we don't mount the ~/.kube directory
RUN mkdir /home/kops/.kube \
 && touch /home/kops/.kube/config

docker-compose.yml
version: '2.1'

services:
  cluster-main:
    container_name: cluster.com
    image: cluster.com
    user: root
    stdin_open: true
    volumes:
      - ./data:/data
      - ./.ssh:/root/.ssh
      - ./.kube:/root/.kube
      - ./.aws:/root/.aws

  cluster-proxy:
    container_name: cluster.com-kubectl-proxy
    image: cluster.com
    user: root
    entrypoint: kubectl proxy --address='0.0.0.0' --port=8001 --accept-hosts='.*'
    ports:
      - "8001:8001"
    stdin_open: true
    volumes:
      - ./data:/data
      - ./.ssh:/root/.ssh
      - ./.kube:/root/.kube
      - ./.aws:/root/.aws

Makefile
docker.build:
	docker build -t cluster.com .

docker.run:
	docker-compose up -d

docker.bash:
	docker exec -it cluster.com bash


Dockerfile - we take a base ubuntu image and install all the software. Makefile - just for convenience; you could use the usual alias mechanism instead. docker-compose.yml - we add an extra container that serves the K8s Dashboard to the browser via kubectl proxy if we need to look at something visually.

Create the data, .ssh, .kube, .aws folders in the root, put the aws config and ssh keys there, and we can build and run our container via make docker.build & make docker.run.

Then, in the data folder, create one directory where we put the k8s yaml files, and a second one next to it where we will store the terraform state of the cluster. The approximate result of this stage is on GitHub.

1. Raising our cluster


What follows is a loose translation of this note. I will omit many theoretical points and try to give a brief digest. After all, the format of my notes is tl;dr.

In our data/aws-cluster-init-kops-terraform folder we clone what lies in this repository and enter the container console via make docker.bash. A scattering of boring commands begins.

AWS CLI


Create a kops user, attach permissions, and reconfigure the AWS CLI to use it, so as not to run commands as the superuser.

aws iam create-group --group-name kops

# Attach the required policies to the group
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonRoute53FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/IAMFullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AWSCertificateManagerFullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonVPCFullAccess --group-name kops

aws iam create-user --user-name kops
aws iam add-user-to-group --user-name kops --group-name kops
aws iam create-access-key --user-name kops

 aws configure 

Initializing Terraform


Change the cluster name in the data/aws-cluster-init-kops-terraform/variables.tf file. Don't forget to take our DNS servers from the generated update-zone.json file and set them wherever you bought your domain.

# Go to the folder
cd /data/aws-cluster-init-kops-terraform

# Export the keys for the AWS CLI
export AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id)
export AWS_SECRET_ACCESS_KEY=$(aws configure get aws_secret_access_key)

# Apply terraform
terraform init
terraform get
terraform apply

# Fill in the NS records (write to a temp file so the pipe doesn't truncate its own input)
cat update-zone.json \
    | jq ".Changes[].ResourceRecordSet.Name=\"$(terraform output name).\"" \
    | jq ".Changes[].ResourceRecordSet.ResourceRecords=$(terraform output -json name_servers | jq '.value|[{"Value": .[]}]')" \
    > update-zone.json.tmp && mv update-zone.json.tmp update-zone.json

Kops


We create the cluster through kops, exporting the config as a .tf file.

export NAME=$(terraform output cluster_name)
export KOPS_STATE_STORE=$(terraform output state_store)
export ZONES=$(terraform output -json availability_zones | jq -r '.value|join(",")')

kops create cluster \
  --master-zones $ZONES \
  --zones $ZONES \
  --topology private \
  --dns-zone $(terraform output public_zone_id) \
  --networking calico \
  --vpc $(terraform output vpc_id) \
  --target=terraform \
  --out=. \
  ${NAME}

A small remark is needed here. Terraform will create the VPC, and we will need to tweak the config that kops gives us a little. This is done quite simply, via the auxiliary image ryane/gensubnets:0.1.
# Dump the terraform outputs
terraform output -json > subnets.json

# Generate the subnet config for the cluster spec (feed the file contents into the container)
cat subnets.json | docker run --rm -i ryane/gensubnets:0.1

You can add the route53 policies right away.

additionalPolicies:
  master: |
    [
      { "Effect": "Allow", "Action": ["route53:ListHostedZonesByName"], "Resource": ["*"] },
      { "Effect": "Allow", "Action": ["elasticloadbalancing:DescribeLoadBalancers"], "Resource": ["*"] },
      { "Effect": "Allow", "Action": ["route53:ChangeResourceRecordSets"], "Resource": ["*"] }
    ]
  node: |
    [
      { "Effect": "Allow", "Action": ["route53:ListHostedZonesByName"], "Resource": ["*"] },
      { "Effect": "Allow", "Action": ["elasticloadbalancing:DescribeLoadBalancers"], "Resource": ["*"] },
      { "Effect": "Allow", "Action": ["route53:ChangeResourceRecordSets"], "Resource": ["*"] }
    ]

We edit it via kops edit cluster ${NAME}.



Now we can bring up the cluster itself.

kops update cluster \
  --out=. \
  --target=terraform \
  ${NAME}

terraform apply

Everything should go well, and the kubectl context will switch to the new cluster. The cluster state will be stored in the data/aws-cluster-init-kops-terraform folder. You can simply put everything in git and push it to a private Bitbucket repository.

$ kubectl get nodes
NAME                            STATUS         AGE
ip-10-20-101-252.ec2.internal   Ready,master   7m
ip-10-20-103-232.ec2.internal   Ready,master   7m
ip-10-20-103-75.ec2.internal    Ready          5m
ip-10-20-104-127.ec2.internal   Ready,master   6m
ip-10-20-104-6.ec2.internal     Ready          5m

2. Raising our application


Now that we have something to work with, we can deploy our services into the cluster. I will put approximate configs in the same repository; they can go into data/k8s.

Service stuff


Let's start with the service stuff. We need helm, route53, storage-classes, and access to our private registry on hub.docker.com (or any other, if you prefer); a sketch of such a registry secret follows the commands below.

# Init helm (run helm init before patching, so that the tiller-deploy deployment exists)
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'

kubectl apply -f default-namespace.yaml
kubectl apply -f storage-classes.yaml
kubectl apply -f route53.yaml
kubectl apply -f docker-hub-secret.yml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
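
The article doesn't reproduce docker-hub-secret.yml itself. As a rough sketch, assuming a dockerconfigjson-type secret (the name docker-hub-secret and the default namespace are my guesses), it could look like this:

apiVersion: v1
kind: Secret
metadata:
  # assumed name; the deployments would reference it via imagePullSecrets
  name: docker-hub-secret
  namespace: default
type: kubernetes.io/dockerconfigjson
data:
  # base64 of a ~/.docker/config.json that contains the hub.docker.com credentials
  .dockerconfigjson: <BASE64_OF_DOCKER_CONFIG_JSON>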

PostgreSQL + Redis


I've been burned many times by using docker for containers that are not stateless, but this latest configuration has proven the most suitable. I use Stolon for scalability and failover. It has been running fine for about a year.

Deploy the helm charts and a couple of quick Redis config files (a rough sketch of the Redis manifests follows the commands).

# Deploy etcd for stolon
cd etcd-chart
helm install --name global-etcd .

# Deploy stolon itself
cd stolon-chart
helm dep build
helm install --name global-postgres .

# Deploy redis
kubectl apply -f redis
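
The Redis manifests live in the repository; just to give an idea of what kubectl apply -f redis picks up, a minimal single-instance sketch (the names, image tag, and lack of persistence are my assumptions) might be:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis:4-alpine
          ports:
            - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  selector:
    app: redis
  ports:
    - port: 6379
      targetPort: 6379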

Nginx + PHP


The usual pair: Nginx and php-fpm. I didn't particularly clean up the configs, but everyone can customize them for themselves. Before applying, you must specify the image from which we take the code and add the certificate line (its ARN) from AWS Certificate Manager. The php image itself can be taken from Docker Hub, but I built my own private one with some extra libraries.

kubectl apply -f nginx
kubectl apply -f php

In our code image, the code is stored in the /crm-code folder. Substitute your own image and it will work correctly. The file is nginx/deployment.yml.
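
The actual nginx/deployment.yml is in the repository and isn't reproduced here; a rough sketch of the idea, assuming the code image is pulled in via an init container that copies /crm-code into a shared volume (the names, image tags, and the copy approach are my assumptions), could look like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      imagePullSecrets:
        - name: docker-hub-secret          # assumed secret name from docker-hub-secret.yml
      initContainers:
        # copy the application code out of the private code image into a shared volume
        - name: code
          image: GROUP/crm-code:latest     # substitute your own code image here
          command: ["sh", "-c", "cp -r /crm-code/. /shared-code/"]
          volumeMounts:
            - name: code
              mountPath: /shared-code
      containers:
        - name: nginx
          image: danday74/nginx-lua        # image used later in the rollout commands
          ports:
            - containerPort: 80
          volumeMounts:
            - name: code
              mountPath: /crm-code
      volumes:
        - name: code
          emptyDir: {}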



We expose the domain. The route53 service will pick it up and change/add the DNS records, and the certificate from AWS Certificate Manager will be attached to the ELB. The file is nginx/service.yml.
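
nginx/service.yml isn't shown either; a sketch of a LoadBalancer service with the ACM certificate attached to the ELB and a hostname for the route53 service to pick up (the dns: route53 label, the domainName annotation, and the placeholder ARN and hostname are assumptions based on the kops route53-mapper addon) might be:

apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    dns: route53                           # assumed label the route53 mapper watches for
  annotations:
    domainName: "app.example.com"          # assumed hostname to create in Route53
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:<ACCOUNT_ID>:certificate/<CERT_ID>"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 80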



We forward env variables into php so they are available inside and we can connect to PostgreSQL/Redis. The file is php/deployment.yml.
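
php/deployment.yml is also only referenced; as a sketch of the env-variable forwarding (the variable names follow Laravel conventions, and the service hostnames for the Stolon proxy and Redis are assumptions), it might contain something like:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: php-fpm
spec:
  replicas: 2
  selector:
    matchLabels:
      app: php-fpm
  template:
    metadata:
      labels:
        app: php-fpm
    spec:
      imagePullSecrets:
        - name: docker-hub-secret               # assumed secret name
      containers:
        - name: php-fpm
          image: GROUP/php-fpm:latest           # the private php-fpm image from the article
          ports:
            - containerPort: 9000
          env:
            # assumed Laravel-style variable names and in-cluster service hostnames
            - name: DB_HOST
              value: "global-postgres-stolon-proxy"
            - name: DB_PORT
              value: "5432"
            - name: REDIS_HOST
              value: "redis"
            - name: REDIS_PORT
              value: "6379"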



As a result, we have a K8s cluster that we can scale at a basic level, add new services and new servers (nodes) to, and change the number of PostgreSQL, PHP, and Nginx instances in, and we can live with it until a dedicated person appears on the team to take care of it.

Within this small note I will not touch on backups/monitoring of all this goodness. At the initial stage, localhost:8001/ui from the K8s Dashboard service will be enough. Later you can bolt on Prometheus, Grafana, Barman, or any other similar solutions.

From the terminal, or from TeamCity / Jenkins, a code update will look like this.

# Build and push the image with the code - this part can live in Teamcity
docker build -t GROUP/crm-code:latest .
docker push GROUP/crm-code:latest

# Roll out the new images (one after another)
kubectl set image deployment/php-fpm php-fpm=GROUP/php-fpm
kubectl rollout status deployment/php-fpm
kubectl set image deployment/php-fpm php-fpm=GROUP/php-fpm:latest

kubectl set image deployment/nginx nginx=danday74/nginx-lua
kubectl rollout status deployment/nginx
kubectl set image deployment/nginx nginx=danday74/nginx-lua:latest

kubectl rollout status deployment/php-fpm
kubectl rollout status deployment/nginx

I would be glad if this is interesting to someone, and doubly glad if it helps someone. Thank you for your attention. Once again I attach the links to the repositories: one and two.

Source: https://habr.com/ru/post/423481/

