Last week I went to DUMP (https://dump-ekb.ru/), an IT conference in Yekaterinburg, and I want to tell you what was said in the Backend and Devops sections, and whether regional IT conferences are worth your attention.
Nikolay Sverchkov from Evil Martians on Serverless

What was there at all?
There were 8 sections in total: Backend, Frontend, Mobile, Testing and QA, Devops, Design, Science and Management.
The largest rooms, by the way, went to Science and Management)) They seat ~350 people each. Backend and Frontend were slightly smaller. The Devops hall was the smallest, but lively.
I listened to talks in the Devops and Backend sections and chatted a bit with the speakers. I want to cover the topics discussed and give an overview of these two sections of the conference.
Representatives of SKB-Kontur, DataArt, Evil Martians, the Yekaterinburg web studio Flag, and Miro (RealTimeBoard) gave talks in the Devops and Backend sections. The talks covered CI/CD, working with queue services, logging, Serverless, and working with PostgreSQL in Go.
There were also talks from Avito, Tinkoff, Yandex, Jetstyle, Megafon and Ak Bars Bank, but I physically did not have time to attend them (the videos and slides are not available yet; they promise to post them on dump-ekb.ru within 2 weeks).
Devops section
Surprisingly, this section was held in the smallest room, with about 50 seats. People even stood in the aisles :) I'll tell you about the talks I managed to attend.
Petabyte Elastic
The section began with a talk by Vladimir Leela (SKB-Kontur) about Elasticsearch at Kontur. They run quite a large, heavily loaded Elastic (~800 TB of data, ~1.3 PB including redundancy). There is a single Elasticsearch for all Kontur services; it consists of 2 clusters (of 7 and 9 servers) and is so important that Kontur has a dedicated Elasticsearch engineer (Vladimir himself, in fact).
Vladimir also shared his thoughts on the benefits of Elasticsearch and the problems it brings.
Benefits:
- All logs in one place, with easy access to them
- Logs are stored for a year and are easy to analyze
- High speed of working with logs
- Cool data visualization out of the box
Problems:
- a message broker is a must-have (at Kontur, Kafka fills this role)
- quirks of Elasticsearch Curator (its scheduled housekeeping tasks periodically generate high load)
- no built-in authorization (available only as separate, rather expensive paid options, or as open-source plugins of varying production-readiness)
Reviews of Open Distro for Elasticsearch were only positive :) It resolves that same authorization issue.
Where does the petabyte come from?
Their nodes are servers with 12 × 8 TB SATA + 2 × 2 TB SSD. Cold storage lives on SATA; the SSDs serve only as a hot cache.
7 + 9 servers: (7 + 9) × 12 × 8 = 1536 TB.
Part of the space is held in reserve, allocated to redundancy, etc.
Logs of about 90 applications are sent to Elasticsearch, including all reporting services of Kontur, Elba, etc.
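As a back-of-the-envelope check of these numbers (the replication factor is my own estimate derived from the figures quoted in the talk, not something Vladimir stated):

```python
# Rough capacity math for the Kontur Elasticsearch clusters described above.
servers = 7 + 9              # two clusters: 7 and 9 servers
sata_per_server_tb = 12 * 8  # 12 SATA drives x 8 TB each

raw_tb = servers * sata_per_server_tb
print(raw_tb)  # 1536 TB of raw SATA capacity

data_tb = 800              # logical data, per the talk
with_redundancy_tb = 1300  # ~1.3 PB on disk, per the talk
replication_factor = with_redundancy_tb / data_tb
print(round(replication_factor, 2))  # 1.62, i.e. a bit over one extra copy on average
```

The remaining ~230 TB of raw capacity is consistent with the "space in reserve" mentioned above.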
Features of Serverless development
Next came a talk on Serverless by Ruslan Serkin from DataArt.
Ruslan talked about what development with the Serverless approach looks like in general and what its features are.
Serverless is a development approach in which developers do not touch the infrastructure at all. Examples: AWS Lambda, Kubeless.io (Serverless inside Kubernetes), Google Cloud Functions.
The ideal Serverless application is simply a function; requests reach it through the Serverless provider's API Gateway. It is an ideal microservice, and AWS Lambda alone supports a large number of modern programming languages. With cloud providers the cost of supporting and deploying infrastructure drops to zero, and small applications are very cheap to run (AWS Lambda: ~$0.2 per 1 million simple requests).
The scalability of such a system is almost perfect: the cloud provider takes care of it itself, and Kubeless scales automatically within the Kubernetes cluster.
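To make the "application as a function" idea concrete, here is a minimal sketch in the AWS Lambda style. The handler signature follows Lambda's Python convention; the greeting logic and field names are invented for illustration:

```python
import json

def handler(event, context):
    """Entry point invoked by the Serverless provider; we write no server code.

    `event` carries the request (here, an API Gateway-style JSON body);
    `context` carries runtime metadata (request id, remaining time, ...).
    """
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Locally you can call it directly; in the cloud the provider does this for you.
print(handler({"body": '{"name": "DUMP"}'}, None))
```

Everything else, scaling, routing, and the runtime itself, is the provider's problem.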
There are disadvantages:
- developing large applications becomes more difficult
- profiling applications is tricky (only logs are available to you, not profiling in the usual sense)
- no versioning
Frankly speaking, I had heard about Serverless several years ago, but in all those years I never understood how to use it properly. After Ruslan's talk an understanding emerged, and after Nikolay Sverchkov's (Evil Martians) talk in the Backend section it was cemented. Not for nothing did I go to the conference :)
CI for the poor, or is it worth writing your own CI for a web studio
Mikhail Radionov, head of the Flag web studio from Yekaterinburg, spoke about their self-written CI/CD.
His studio went from “manual CI/CD” (SSH into the server, git pull, repeat 100 times a day) to Jenkins, and then to a self-written tool called Pullkins that lets them control the code and perform releases.
Why didn't Jenkins suit them? It did not offer enough flexibility out of the box and was too complicated to customize.
“Flag” develops on Laravel (a PHP framework). In building the CI/CD server, Mikhail and his colleagues used Laravel's built-in mechanisms Telescope and Envoy. The result is a PHP server that processes incoming webhook requests, can build the frontend and backend, deploy to different servers, and report to Slack.
Then, to be able to perform blue/green deploys and to have uniform settings across the dev-stage-prod environments, they switched to Docker. The advantages stayed the same; environment homogeneity and seamless deployment were added, along with the need to learn Docker well enough to work with it correctly.
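The general shape of such a self-written CI server is a webhook handler that maps a pushed branch to a deploy target and a list of steps. Pullkins itself is in PHP; this is just an illustrative sketch in Python, and the branch-to-environment rules and step names are invented:

```python
# Sketch: decide what to do with an incoming git-hosting push webhook.
# The branch -> environment mapping below is invented for illustration.
BRANCH_TARGETS = {
    "master": "prod",
    "stage": "stage",
}

def plan_deploy(webhook_payload: dict):
    """Return (environment, steps) for a push event."""
    ref = webhook_payload.get("ref", "")        # e.g. "refs/heads/master"
    branch = ref.rsplit("/", 1)[-1]
    target = BRANCH_TARGETS.get(branch, "dev")  # feature branches go to dev
    steps = [
        "git pull",
        "build frontend",
        "build backend",
        f"deploy to {target}",
        "notify Slack",
    ]
    return target, steps

print(plan_deploy({"ref": "refs/heads/master"}))
```

The real tool additionally runs the builds and performs the blue/green switch, but the webhook-in, steps-out structure is the core of the idea.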
The project is on Github

How we reduced the number of server release rollbacks by 99%
The last talk in the Devops section was from Viktor Eremchenko, lead devops engineer at Miro.com (formerly RealTimeBoard).
At the core of RealTimeBoard, the Miro team's main product, is a monolithic Java application. Building, testing and deploying it without downtime is a difficult task, and it is important to ship a version of the code that will not have to be rolled back (it is a heavy monolith, after all).
On the way to building a system that allows this, Miro worked on the architecture, on the tools used (Atlassian Bamboo, Ansible, etc.), and on team structure (they now have a dedicated Devops team plus many separate Scrum teams of developers with different profiles).
The path turned out to be difficult and thorny, and Viktor shared the solutions they found, the pain they accumulated, and the optimism that never ran out.
Won a book for his questions

Backend section
I had time for 2 talks: one from Nikolay Sverchkov (Evil Martians), also about Serverless, and one from Grigoriy Koshelev (Kontur) about telemetry.
Serverless for mere mortals
While Ruslan Serkin talked about what Serverless is, Nikolay showed simple applications built with Serverless and talked about the details that affect the cost and speed of applications in AWS Lambda.
An interesting detail: the minimum billing unit is 128 MB of memory and 100 ms of CPU time, and it costs $0.000000208. Moreover, 1 million such requests per month are free.
Some of Nikolay's functions often exceeded the 100 ms limit (the main application was written in Ruby), so rewriting them in Go yielded excellent savings.
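Using the per-unit price quoted above, the effect of crossing a 100 ms billing boundary is easy to estimate. The request volume and durations here are an invented example, not figures from the talk:

```python
import math

# $ per 100 ms billing unit at 128 MB, as quoted in the talk
PRICE_PER_100MS_128MB = 0.000000208

def monthly_cost(requests: int, avg_duration_ms: float) -> float:
    """Duration is billed in 100 ms increments, rounded up."""
    units = math.ceil(avg_duration_ms / 100)
    return requests * units * PRICE_PER_100MS_128MB

# 10M requests/month: a function at 120 ms pays for 2 units per request,
# while a faster rewrite at 80 ms pays for only 1.
print(round(monthly_cost(10_000_000, 120), 2))  # 4.16
print(round(monthly_cost(10_000_000, 80), 2))   # 2.08
```

Shaving a function from just over 100 ms to just under it halves the bill, which is exactly why the Ruby-to-Go rewrite paid off.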
Vostok Hercules - make telemetry great again!
The last talk of the Backend section, by Grigoriy Koshelev (Kontur), was about telemetry. Telemetry means logs, metrics, and application tracing.
For this, Kontur uses self-written tools published on Github. The tool from the talk, Hercules ( github.com/vostok/hercules ), is used to deliver telemetry data.
Vladimir Leela's talk in the Devops section covered storing and processing logs in Elasticsearch, but there is still the task of delivering logs from many thousands of devices and applications, and that is what tools like Vostok Hercules solve.
Kontur went down a path familiar to many, from RabbitMQ to Apache Kafka, but not everything is so simple)) They had to add Zookeeper, Cassandra and Graphite to the scheme. I won't fully cover this talk (it's not my area); if you're interested, wait for the slides and videos on the conference website.
How does it compare to other conferences?
I can't compare it with conferences in Moscow and St. Petersburg, but I can compare it with other events in the Urals and with 404fest in Samara.
DUMP runs 8 sections, a record for Ural conferences. The Science and Management sections are very large, which is also unusual. The audience in Yekaterinburg is quite seasoned: Yandex, Kontur and Tinkoff have large development offices there, and this leaves its imprint on the talks.
Another interesting point: many companies had 3-4 speakers at the conference at once (as was the case with Kontur, Evil Martians and Tinkoff). Many of them were sponsors, but their talks held their own alongside the rest; these were not promotional talks.
To go or not to go? If you live in the Urals or nearby, have the opportunity, and the topics interest you, then yes, of course. If you are considering a long trip, I would look at the talk topics and the videos from previous years ( www.youtube.com/user/videoitpeople/videos ) and decide.
Another plus of conferences in the regions is that, as a rule, it is easy to talk to a speaker after the presentation; there are fewer contenders for such communication.

Thanks to DUMP and Yekaterinburg! )