Many books on joinery and carpentry begin with a story about the proper organization of the workplace and tools. I would like to believe that software development is also a craft with its own culture and skills. A rational approach to the working environment reduces the cost of development and later maintenance through early detection of problems, and it increases developer productivity. The topic is, of course, extensive, and I plan to write a series of articles:
After reading this series, the reader should have an idea of the approaches used in modern development and testing of projects at various levels, from small utilities to distributed cluster systems. This article deals with the tooling and the simplest sandbox. If the topic attracts interest, I will continue the series with cluster systems in Erlang, Golang, Elixir, and Rust.
Note: to work through this article successfully, docker, docker-compose, and GNU Make must be installed on your machine. Installing Docker does not take much time; just remember that your user must be added to the docker group (for example, with `sudo usermod -aG docker $USER`).
This code is tested only on debian-like distributions.
So let's try to create an atomic counter. The application must provide the following functions:
And meet the requirements:
The text of the article lays out the principles and the motivation behind certain decisions, but it contains no code listings. All code is available in the repository: https://github.com/Vonmo/acounter
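To give a taste of what such a counter might look like, here is a minimal sketch. The module and function names below are my own illustration, not the actual interface of the acounter project; one simple way to get atomic increments in Erlang is an ETS table updated with ets:update_counter.

```erlang
%% Hypothetical sketch of an atomic counter; names are illustrative,
%% see the repository for the real interface.
-module(counter_demo).
-export([start/0, inc/1, value/1]).

start() ->
    %% a public named ETS table lets any process update the counter
    ets:new(counters, [named_table, public, set]),
    ok.

inc(Name) ->
    %% ets:update_counter/4 performs an atomic increment;
    %% the {Name, 0} default is inserted if the key is missing
    ets:update_counter(counters, Name, {2, 1}, {Name, 0}).

value(Name) ->
    case ets:lookup(counters, Name) of
        [{Name, V}] -> V;
        []          -> 0
    end.
```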
Currently there are many ways to virtualize an environment. All of them have their advantages and disadvantages, and choosing one is often quite difficult. For developing server-side and distributed software I propose Docker containerization: docker is a flexible and modern tool that reduces hardware costs during the development phase, improves testing processes, and in some cases simplifies application delivery to end users. For example, it is no problem to run a cluster of 12-15 containers with various services on an average laptop, simulate the interaction of these services, and write integration tests in an environment close to production, as well as to check how your services scale and to test failures, including major outages and recovery from them.
Note: docker and docker-compose are proposed as a solution for the development phase only: the working environment of programmers and staging for testers. Justifying their use in a production environment is beyond the scope of this article.
Since our environment is divided into two levels, the host and the containers, we need two makefiles:
docker-compose.yml describes all the containers of our cluster:
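As an orientation, a docker-compose.yml for such a sandbox might look roughly like this. The service name, image name, and paths below are assumptions for illustration, not the exact contents of the repository:

```yaml
# Illustrative sketch of a docker-compose.yml for the sandbox.
# Image and service names are hypothetical.
version: "2"
services:
  test:
    image: vonmo/erlang-builder      # assumed base image with erlang/kerl preinstalled
    volumes:
      - .:/project                   # mount the source tree into the container
    working_dir: /project
    command: make tests              # run the in-container makefile target
```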
Note: if you need to guarantee that your application works on other versions of erlang, the base image can be extended with those versions. kerl is already installed in the base image, so all we have to do is add the required erlang version to it, plus add lines to the makefile to run the tests on each version.
To manage the virtual environment, the following targets are predefined in the makefile:

$ make build_imgs - creates the necessary docker images
$ make up - starts and configures the containers
$ make down - cleans up the test environment

Many programs that have to be maintained and developed carry third-party dependencies. This can be either a dependency on code, in the form of a third-party library, or a dependency on a utility, for example a database migration tool.
We already settled dependencies on utilities and their versions, as well as on binary libraries, in the previous section. Now let's take a quick look at dependency management and the build process in Erlang. The de facto standard build tools in the erlang world are erlang.mk and rebar. Since I use rebar in everyday practice, we'll settle on it.
The main functions of rebar: dependency management, compilation, running tests, and building releases.
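For reference, a typical rebar.config covering dependencies and a test profile might look like this. The listed packages and versions are examples only, not the actual dependencies of the acounter project:

```erlang
%% rebar.config sketch; the deps shown are illustrative examples.
{erl_opts, [debug_info]}.

{deps, [
    %% a hex.pm package pinned by version
    {cowboy, "2.6.1"},
    %% a git dependency pinned by tag
    {jiffy, {git, "https://github.com/davisp/jiffy.git", {tag, "1.0.1"}}}
]}.

{profiles, [
    %% extra deps pulled in only for the test profile
    {test, [{deps, [{meck, "0.8.13"}]}]}
]}.
```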
The following targets are defined in the makefile for building and testing:

$ make tests - builds the test profile of the application and runs all tests
$ make rel - builds the final release

Testing is a standard engineering practice: virtually all of the objects around us were developed with tests in one form or another. In the erlang world there are two basic testing frameworks: eunit and common test (hereinafter CT). Both tools let you test almost every aspect of the designed system; the only question is the complexity of the tool itself and the preparatory work needed before the tests actually run. Eunit offers a unit-testing path, while common test is a more flexible and versatile tool with an emphasis on integration testing.
CT imposes a clear hierarchy on the testing process. Specifications let you customize every aspect of a test run. Next come suites, in which groups of test cases are combined into logically complete units. Within a test group we can also control the run order and parallelism, and flexibly configure the test environment.
The flexibility in configuring the test environment comes from a three-level model of test-case initialization and teardown:

init_per_suite/end_per_suite - called once when a given suite starts and ends
init_per_group/end_per_group - called once for a given group
init_per_testcase/end_per_testcase - called before and after each test in the group

Surely everyone who has developed through tests with eunit has faced a situation where a flaky test failed and left behind, for example, applications still loaded in the test environment that broke the initialization of subsequent tests. Thanks to the flexibility of CT it is possible to handle such situations correctly, along with many others, and also to reduce the total run time of the tests through thoughtful initialization of the environment.
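The three-level model above can be sketched as a minimal CT suite. The module name, group name, and the acounter:value/1 call are illustrative assumptions, not code from the repository:

```erlang
%% Minimal common test suite sketch showing the three init/teardown levels.
-module(sample_SUITE).
-include_lib("common_test/include/ct.hrl").

-export([all/0, groups/0,
         init_per_suite/1, end_per_suite/1,
         init_per_group/2, end_per_group/2,
         init_per_testcase/2, end_per_testcase/2,
         counter_starts_at_zero/1]).

all() -> [{group, basic}].

groups() -> [{basic, [parallel], [counter_starts_at_zero]}].

%% called once before any test in the suite runs
init_per_suite(Config) ->
    {ok, _} = application:ensure_all_started(acounter),
    Config.

%% called once after all tests in the suite have finished
end_per_suite(_Config) ->
    ok = application:stop(acounter).

%% called once per group
init_per_group(_Group, Config) -> Config.
end_per_group(_Group, _Config) -> ok.

%% called before/after every test case in the group
init_per_testcase(_Case, Config) -> Config.
end_per_testcase(_Case, _Config) -> ok.

counter_starts_at_zero(_Config) ->
    %% assumes a hypothetical acounter:value/1 API
    0 = acounter:value(test_counter).
```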
So why do we need xref? In short, to identify dependencies between functions, applications, and releases, as well as to detect dead code.
In large projects it often happens that some piece of code becomes dead. The reasons are many: for example, we wrote function A in module X, then it moved to module Z under the name A2; all the tests passed, and the developer forgot about X:A. Since function A was exported, the compiler did not tell us that X:A is no longer used. Of course, the sooner we remove dead code, the smaller the code base and, accordingly, the cost of maintaining it.
How does xref work? It inspects all calls and compares them with the functions defined in the modules. If a function is defined but not called anywhere, a warning is emitted. There is also a usage scenario where we need to find all the places a particular function is called from.
To use xref in the working environment, the following target is predefined:
$ make xref
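Under the hood this target relies on rebar's xref support. A typical set of checks in rebar.config looks like the following (a common configuration, not necessarily the exact one used in the repository):

```erlang
%% rebar.config fragment enabling common xref checks
{xref_checks, [
    undefined_function_calls,   %% calls to functions that do not exist
    locals_not_used,            %% dead local functions
    exports_not_used,           %% exported but never called (dead code)
    deprecated_function_calls   %% calls to deprecated functions
]}.
```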
In the previous section we figured out how to identify dependencies and unused functions. But what if a function exists and is used, yet its arity (number of arguments) or the arguments themselves do not match the definition? Or take, for example, branches of case and if statements that can never execute, redundant checks in guard expressions, or type declarations that contradict the code. Dialyzer is used to find such discrepancies.
To use dialyzer in the working environment, the following target is predefined:
$ make dialyzer
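As a quick illustration of the kind of discrepancy dialyzer catches, consider this contrived module (my own example, not from the repository): the code compiles fine, but dialyzer reports that the call breaks the function's contract.

```erlang
%% Contrived example: compiles cleanly, but dialyzer flags the call
%% in run/0 as breaking the contract of double/1.
-module(demo).
-export([run/0]).

-spec double(integer()) -> integer().
double(X) -> X * 2.

run() ->
    %% the argument is a string (a list), not an integer():
    %% dialyzer warns that this call will never succeed
    double("not an int").
```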
Each team decides for itself which coding standards to follow, and whether to follow any at all. Most large projects try to adhere to coding standards, because this practice removes a number of problems with maintaining the code base.
Since there is no single universal IDE for Erlang (some love emacs, others vim or sublime), the problem of automatic style checking arises. Fortunately, there is an interesting project, elvis, which makes it possible to enforce coding standards without wars within the team.
For example, we can agree that before pushing to the repository we run a style check.
To use elvis in the working environment, the following target is predefined:
$ make lint
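elvis is driven by an elvis.config file of Erlang terms. A minimal rule set might look like this; the specific rules and limits below are an example, not the project's actual configuration:

```erlang
%% elvis.config sketch: check all *.erl files under src/ with a few
%% common style rules (rule set chosen for illustration).
[{elvis, [
    {config, [
        #{dirs   => ["src"],
          filter => "*.erl",
          rules  => [
              {elvis_style, line_length, #{limit => 100}},  %% max line length
              {elvis_style, no_tabs},                       %% spaces only
              {elvis_style, used_ignored_variable}          %% no use of _Ignored vars
          ]}
    ]}
]}].
```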
To try the sandbox, clone the repository and run:

$ git clone https://github.com/Vonmo/acounter.git
$ make build_imgs
$ make up
$ make tests
$ make rel
$ ./_build/prod/rel/acounter/bin/acounter console
In conclusion, I would like to thank readers for their patience and interest in the topic. In a short time we have built a working sandbox that simplifies and stabilizes the development process. In the following articles I will try to show how this sandbox can be extended for developing distributed and multi-component systems.
Erlang is not the most popular language, but it is great for server software and soft realtime systems. And I would like to at least somewhat revive this topic on Habr.
Source: https://habr.com/ru/post/346254/