In the life of any DevOps engineer there comes a moment when it becomes necessary to create a playground for the development team. As always, it should be smart, nimble, and consume the minimum amount of resources. In this article I want to talk about how I solved the problem of creating such a beast for a microservice application on Kubernetes.

Incoming conditions and requirements
A little about the system for which the playground had to be created:
- Kubernetes, bare-metal cluster;
- Simple api-gateway based on nginx;
- MongoDB as a DB;
- Jenkins as a CI server;
- Git on Bitbucket;
- Two dozen microservices that can communicate with each other (via the api-gateway), with the database, and with the user.

Requirements that we were able to formulate while actively communicating with the team lead:
- Minimization of resource consumption;
- Minimization of changes in the code of services for work on the playground;
- The possibility of parallel development of several services;
- Ability to develop multiple services in the same space;
- The ability to show changes to customers before deployment to staging;
- All developed services can work with one database;
- Minimizing developer efforts to deploy test code.
Reflections on the topic
From the very beginning it was clear that the most logical way to create parallel spaces in k8s is to use its native virtual-cluster tool, namespaces. The task is also simplified by the fact that all interactions within the cluster use the short names provided by kube-dns, which means the whole structure can be launched in a separate namespace without losing connectivity.
This solution has only one problem: the need to deploy every available service into the namespace, which is slow, inconvenient, and consumes a large amount of resources.
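For reference, a namespace itself is cheap to create; what is expensive is filling it with all the services. A hypothetical illustration (names invented):
kubectl create namespace play-new-feature
kubectl -n play-new-feature apply -f api-gw.yaml
# ...and so on for each of the two dozen services; this is exactly the cost we want to avoid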
Namespace and DNS
When creating any service, k8s creates a DNS record of the form
<service-name>.<namespace-name>.svc.cluster.local. This mechanism allows communication through short names within one namespace thanks to changes made to the resolv.conf of each launched container.
In its normal state, it looks like this:
search <namespace-name>.svc.cluster.local svc.cluster.local cluster.local
nameserver 192.168.0.2
options ndots:5
That is, a service in the same namespace can be addressed simply as <service-name>, and a service in a neighboring namespace as <service-name>.<namespace-name>.
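You can check this from inside any container; a hypothetical session (service and namespace names invented):
# from a pod in namespace play-foo
nslookup api-gw               # resolved in the pod's own namespace
nslookup api-gw.default       # a service in the neighboring default namespace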
Working around the system
At this point a simple thought comes to mind: "The database is shared and the api-gateway routes requests to the services, so why not make it look for a service in its own namespace first, and fall back to default if it is not there?"
Yes, such a solution could be organized in the api-gateway settings (remember, it is nginx), but it would introduce a difference between the playground configuration and that of the other clusters, which is inconvenient and can cause a number of problems.
So, the chosen method was to replace the line
search <namespace-name>.svc.cluster.local svc.cluster.local cluster.local
with
search <namespace-name>.svc.cluster.local svc.cluster.local cluster.local default.svc.cluster.local
This approach provides an automatic fallback to the default namespace when the required service is absent from the pod's own namespace.
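As a side note, recent Kubernetes versions let you achieve the same effect per pod, without touching the nodes, via dnsConfig. A minimal sketch (not the method used in this article):
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - name: app
    image: nginx
  dnsConfig:
    searches:
    - default.svc.cluster.local
With the default ClusterFirst DNS policy, the extra search domain is appended to the list generated by kubelet.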

A similar result can be achieved cluster-wide as follows. Kubelet copies the search parameters into containers from the host's resolv.conf, so it is enough to add one line to /etc/resolv.conf on each node:
search default.svc.cluster.local
If you do not want the nodes themselves to resolve service addresses, you can use the --resolv-conf option when running kubelet, which allows specifying any other file instead of /etc/resolv.conf, for example /etc/k8s/resolv.conf with the same line.
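A sketch of that node-side setup (the kubelet command line is abbreviated; adapt it to however kubelet is launched in your cluster):
# on each node, as root: a dedicated resolv.conf for kubelet
echo "search default.svc.cluster.local" > /etc/k8s/resolv.conf
# point kubelet at it (all other flags omitted)
kubelet --resolv-conf=/etc/k8s/resolv.conf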
A matter of technique
The rest of the solution is quite simple; you only need to adopt the following conventions:
- New features are developed in separate branches named play/<feature-name> (see the sketch after this list);
- To work on several services within one feature, the branch names must match across the repositories of all involved services;
- Jenkins does all the deployment work automatically.
- For feature tests, the branches are available at <feature-name>.cluster.local.
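The branch convention from the first two points, as a hypothetical shell session (repository names invented):
# the same feature developed in two services at once
cd service-a && git checkout -b play/new-feature
cd ../service-b && git checkout -b play/new-feature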
SSL-offloader settings
Nginx config that redirects requests to the api-gw in the corresponding namespace:
server_name ~^(?<namespace>.+)\.cluster\.local;

location / {
    resolver 192.168.0.2;
    proxy_pass http://api-gw.$namespace.svc.cluster.local;
}
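Note that the resolver directive matters here: because proxy_pass contains a variable, nginx resolves the name at request time through the specified DNS server (the kube-dns address, the same 192.168.0.2 as in resolv.conf above) rather than once at startup.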
Jenkins
To automate the deployment process, the Jenkins Pipeline Multibranch Plugin is used.
In the project settings, we configure it to build only the branches matching the play/ pattern, and we add a Jenkinsfile to the root of every project the builder will work with.
The processing is done by a Groovy script; I will not give it in full, just a couple of examples. The rest of the deployment is fundamentally no different from the usual one.
Getting the branch name:
def BranchName() {
    def Name = "${env.BRANCH_NAME}" =~ "play[/]?(.*)"
    Name ? Name[0][1] : null
}
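For example, with env.BRANCH_NAME equal to play/new-feature this returns new-feature; for a branch that does not match the pattern it returns null.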
The minimal configuration of a namespace requires a deployed api-gateway, so we add calls to the jobs that create the namespace and deploy the api-gateway into it:
def K8S_NAMESPACE = BranchName()

build job: 'Create NS', parameters: [
    [$class: 'StringParameterValue', name: 'K8S_NAMESPACE', value: "${K8S_NAMESPACE}"]
]

build job: 'Create api-gw', parameters: [
    [$class: 'StringParameterValue', name: 'K8S_NAMESPACE', value: "${K8S_NAMESPACE}"]
]
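For completeness, a minimal sketch of how these fragments could sit together in a Jenkinsfile; the deploy stage is a placeholder, and the real script is project-specific:
def BranchName() {
    def Name = "${env.BRANCH_NAME}" =~ "play[/]?(.*)"
    Name ? Name[0][1] : null
}

node {
    stage('Checkout') {
        checkout scm
    }
    stage('Prepare namespace') {
        def K8S_NAMESPACE = BranchName()
        build job: 'Create NS', parameters: [[$class: 'StringParameterValue', name: 'K8S_NAMESPACE', value: "${K8S_NAMESPACE}"]]
        build job: 'Create api-gw', parameters: [[$class: 'StringParameterValue', name: 'K8S_NAMESPACE', value: "${K8S_NAMESPACE}"]]
    }
    stage('Deploy service') {
        // build the image and apply the manifests into the feature
        // namespace; the details depend on the project
    }
}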
Conclusion
There is no silver bullet, and I could not find either best practices or descriptions of how others organize their sandboxes, so I decided to share the method I used when creating a k8s sandbox. Perhaps it is not an ideal way, so I am happy to hear comments or stories about how this problem is solved in your setup.