I have worked at several companies that used microservices and ran them in Docker containers. Now I am working on a project that, although a monolith, is still more convenient to run in a container.
On the one hand, Docker is a very versatile tool that can be used easily and effectively for a huge number of tasks. It looks clear and everything seems elementary. On the other hand, if you do not invest time and effort in learning to use it properly, you will most likely over-complicate simple things. And of course you will be sure you are right, and that Docker is worthless, cumbersome garbage that is not suitable for solving your unique task.
Usually, in a typical company, the process of working on any task looks like this:
- A git push is made with our commit.
- Some CI system is triggered, be it Jenkins, TeamCity, etc.
- The job pipeline starts: third-party libraries are downloaded, the project is compiled, tests are run.
- A docker image is built with the compiled project (ADD ...) and pushed to the remote docker registry.
- Somehow (chef, puppet, manually via docker-compose) a docker pull is done on the remote server and the container is started (a rough sketch of the whole pipeline follows this list).
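Condensed into commands, that whole chain looks roughly like this. This is only a sketch: the repository, image name, version tag and registry address are all made up for illustration.

```bash
# Typical CI job, step by step (hypothetical names throughout)
git clone git@example.com:team/my-service.git && cd my-service
npm ci                                                     # download third-party libraries
npm test                                                   # run the tests
docker build -t registry.example.com/my-service:1.0.0 .    # Dockerfile ADDs the built project
docker push registry.example.com/my-service:1.0.0          # push to the remote docker registry

# ...and later, on the target server:
docker pull registry.example.com/my-service:1.0.0
docker run -d registry.example.com/my-service:1.0.0
```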
Intuitively, I always felt that this was somehow too complicated. The process is proudly called CI/CD, and I am already tired of the clever people who have no doubt that this is the simplest way.
For the end user it looks like this: after a push to the git repository, whatever was in the commit gets deployed somewhere.
What I do not like about this approach:
- The only way to deploy the system on a remote server is to go through all 5 steps.
- In step 3, you may need access keys for private libraries, and the process can take a long time if caching of previously downloaded libraries is not configured.
- You need to prepare a Dockerfile, decide on the base image (FROM ...), decide how to tag the image, and have access to the registry into which the image will be pushed.
- You need your own registry and have to configure HTTPS, since the docker client by default only works over HTTPS.
The fourth point, of course, is done only once and maybe should not be on this list.
But hasn't the word Docker already come up too many times by the release stage?
Think about it: why are we dragging all this Docker in ahead of time? Because containers are considered convenient, and the usual answer is "Well, it runs, it works, what's your problem?".
So, to such people I can say: docker containers are not a panacea and not the only environment in which your application can run. A project written in Python, PHP, JS, Swift, Scala/Java, etc. can also be run on:
- a remote virtual machine
- localhost, without any virtualization or docker containers at all.
Surprise :)
Let's imagine that we are building a service that will run on Node.js.
The result of this project (or, as I call it, the "artifact") will be a set of js files (the service itself) plus node_modules (the third-party libraries the service uses).
Suppose we are confident that the service works and want to run it remotely so that our testers can check its functionality.
How do you like this idea:
- We make a .tar.gz of our project and upload it to... a remote artifact storage! (These are also called "binary repositories".)
- We share the URL from which our service can be downloaded, and testing begins (both steps are sketched right after this list).
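With, say, a Nexus raw hosted repository, those two steps boil down to a couple of commands. A minimal sketch; the host, credentials, version number and file layout are assumptions:

```bash
# Pack the built service together with its node_modules into an artifact
tar -czf my-service-1.0.0.tar.gz index.js lib/ node_modules/ package.json

# Upload it to the binary repository (a Nexus raw repository accepts a plain HTTP PUT)
curl -u deployer:secret --upload-file my-service-1.0.0.tar.gz \
  https://nexus.example.com/repository/raw-artifacts/my-service/1.0.0/my-service-1.0.0.tar.gz
```

After that, the URL of the uploaded .tar.gz is the only thing testers need to know.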
Then testers can either run the service locally, if they have everything they need, or write a Dockerfile in which the artifact is downloaded, and simply run the container. Or anything else.
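From the tester's side both options are trivial. A sketch, assuming the artifact from the previous step and an index.js entry point (both are hypothetical):

```bash
# Option 1: run the service locally, no Docker at all
curl -O https://nexus.example.com/repository/raw-artifacts/my-service/1.0.0/my-service-1.0.0.tar.gz
mkdir my-service && tar -xzf my-service-1.0.0.tar.gz -C my-service
node my-service/index.js

# Option 2: wrap the very same artifact in a container
# (the Dockerfile simply downloads the tar.gz, unpacks it and sets CMD ["node", "index.js"])
docker build -t my-service:1.0.0 .
docker run -d -p 3000:3000 my-service:1.0.0
```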
I'll say right away: if you don't want testers launching docker containers, and launching things is generally "not their job", then use a tool that builds images as soon as new artifacts appear in the binary repository (via a web hook, or by polling periodically from cron).
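If you go that route, even a small cron-driven script is enough. A hypothetical sketch, assuming the build also publishes a latest.txt with the current version next to the artifacts and that a Dockerfile with an ARG VERSION lives in /opt/my-service-docker:

```bash
#!/bin/sh
# Rebuild and publish the image only when a new artifact version appears in the binary repository
REMOTE=$(curl -sf https://nexus.example.com/repository/raw-artifacts/my-service/latest.txt)
LOCAL=$(cat /var/lib/my-service/last-built 2>/dev/null)

if [ -n "$REMOTE" ] && [ "$REMOTE" != "$LOCAL" ]; then
  docker build --build-arg VERSION="$REMOTE" \
    -t registry.example.com/my-service:"$REMOTE" /opt/my-service-docker
  docker push registry.example.com/my-service:"$REMOTE"
  echo "$REMOTE" > /var/lib/my-service/last-built
fi
```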
Binary repositories available today include:
- Sonatype Nexus
- Artifactory
Nexus is easy to use and lets you create a bunch of different repository types (npm, maven, raw, docker), so that is what I use.
It's a damn simple idea, so why haven't I read about it anywhere? The Internet is overflowing with articles like "on git push a container gets deployed in some Kubernetes". Such convoluted pipelines make my hair stand on end.
The purpose of this article is to say that you do not have to build the project and bake it into a docker image in one and the same process.
Divide and conquer!
Build the project and publish the artifacts somewhere they can be downloaded from. (A docker registry is not the only place where you can store your project; choose universal ways of delivering artifacts to servers.)
Use a separate tool to deliver the artifacts to the server where your project will run.
It's very simple: give others a choice. Use docker, run in kubernetes, or use any other tool to launch the artifacts. There is no need to impose a technology just because it seems very convenient and fashionable to you.
Good luck launching your projects!