
ELK on Docker

Many people have read about using Elasticsearch, Logstash and Kibana to collect and analyze logs, but articles often start with a long manual on how to set up the ELK services and get them working together.
Here I want to describe a quick start with Docker.

Let me say up front that the article is aimed at those who are already familiar with Docker and want to spin up the ELK stack to get acquainted with it, or for future use in production. For those who do not know whether they need ELK at all, I recommend reading the article "Kibana-mother, or why do you need logs at all?".

So, ideally, the task is to find an ELK container on hub.docker.com and run it. I suggest doing just that, with some modifications. As an example, let's look at shipping nginx logs to Elasticsearch.

The ELK stack covers the following tasks:
1) Processing incoming data and delivering it to Elasticsearch - the Logstash service is responsible for this
2) Search engine and data access interface - Elasticsearch and Kibana are responsible for this
Strictly speaking, though, Logstash should not be responsible for delivering the data itself. Data shipping is delegated to a fourth service: Filebeat.

The general scheme of work is as follows:

[diagram: services with logs -> Filebeat -> Logstash -> Elasticsearch -> Kibana]

The network may contain any number of services whose data must be collected, and the Filebeat service acts as the log shipper that feeds the Logstash service.

In other words, we need another container running the Filebeat service.

Let's get down to business. I strongly recommend using Docker Compose: describing all the parameters in a single YAML file is much more convenient than executing commands with parameters every time, and at the debugging stage you will need to start and stop the containers more than once.

1. Create a project folder, for example myElk, and inside it create a file named docker-compose.yml, which we will fill in step by step.
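Looking ahead at the volume mounts used below, the project folder will end up with roughly the following layout (the exact paths are of course just an example, and the config file name filebeat.yml is an assumption based on what the service expects by default):

myElk/
  docker-compose.yml
  log/
    nginx/            # nginx access.log and error.log will be mounted from here
  filebeat/
    filebeat.yml      # the Filebeat configuration from step 2
    tmp/              # debug output of the file output
    certs/            # TLS certificate for connecting to Logstash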

2. We look for and find a container with Filebeat. I recommend taking olinicola/filebeat: docker pull olinicola/filebeat
Setting up the Filebeat container comes down to preparing a configuration file in YAML format for the filebeat service.
Here it will look like this:

filebeat:
  prospectors:
    -
      paths:
        - "/etc/logs/nginx/access.log"
      document_type: nginx-access
    -
      paths:
        - "/var/log/nginx/error.log"
      document_type: nginx-error
output:
  logstash:
    hosts: ["elk:5044"]
    tls:
      certificate_authorities:
        - /etc/pki/tls/certs/logstash-beats.crt
    timeout: 15
  file:
    path: "/tmp/filebeat"


In short, we take nginx logs from specific locations on the server and send them to the ELK server, which is ready to receive messages from us on port 5044.
In this configuration file, "elk" is the name of the linked container; more on that below.
I also additionally configured output to a file, for easier debugging at the startup stage.
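One note on the certificate: the sebp/elk image already contains logstash-beats.crt (it is referenced in the Logstash input config shown below), so it can be copied out of the running elk container into the host folder that we will later mount at /etc/pki/tls/certs. A sketch, assuming the default Compose container name for a project called myelk:

# copy the certificate Logstash uses out of the elk container;
# the container name myelk_elk_1 is the default Compose naming and may differ
docker cp myelk_elk_1:/etc/pki/tls/certs/logstash-beats.crt /path/to/myElk/filebeat/certs/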

At this stage, you can already supplement docker-compose.yml with the following code:

version: '2'
services:
  filebeat:
    build: .
    image: [ imageId filebeat ]
    volumes:
      - /path/to/myElk/log/nginx:/etc/logs/nginx  # nginx logs
      - /path/to/myElk/filebeat:/etc/filebeat
      - /path/to/myElk/filebeat/tmp:/tmp/filebeat


You can already bring the container up with the docker-compose up command and watch how, as the access.log file changes, the data is written to the file /tmp/filebeat. At this stage there is no elk container yet, so it is better to comment out the logstash output.
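For this debugging stage, the output section of the Filebeat config from step 2 would then look something like this, with only the file output left active:

output:
  # logstash:
  #   hosts: ["elk:5044"]
  #   tls:
  #     certificate_authorities:
  #       - /etc/pki/tls/certs/logstash-beats.crt
  #   timeout: 15
  file:
    path: "/tmp/filebeat"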

3. Ok, we have the first container, with Filebeat. Now we need a second container, with ELK. Go to hub.docker.com and find sebp/elk, or execute the docker pull sebp/elk command.

Configure the ELK container.
The only thing you need to configure here is Logstash, and there are two options: a) leave everything as it is, and it will work, since Logstash in this container is already configured to receive logs from an nginx server.
However, once you get going you will want to take path b), that is, set up the logs the way you need them, because how the logs are shipped to Elasticsearch determines how convenient it will be to analyze the data coming from nginx.
So let me explain the Logstash configuration files. The files that interest us are the following:
Input parameters:
02-beats-input.conf - you can leave it as is

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-beats.crt"
    ssl_key => "/etc/pki/tls/private/logstash-beats.key"
  }
}


Here we see that the service is ready to receive data on port 5044.
Output parameters:
30-output.conf - you can leave it as is

output {
  elasticsearch {
    hosts => ["localhost"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
  stdout {
    codec => rubydebug
  }
}
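With this index template, each day's data ends up in an index named like filebeat-2016.05.01. Once everything is running, you can check that the indices are actually being created with a plain HTTP request to Elasticsearch (port 9200 is published in the Compose file below):

curl 'http://localhost:9200/_cat/indices?v'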


The most interesting part is the data transformation. By default, the 11-nginx.conf file looks like this:

filter {
  if [type] == "nginx-access" {
    grok {
      match => { "message" => "%{NGINXACCESS}" }
    }
  }
}


But after playing with the NGINXACCESS template, you may want to process your logs exactly the way you need.
To do this, you will need to change the filter section. It can take quite a few parameters; this is very well described in We collect, parse and deliver logs using Logstash .
From myself I want to add that the Grok Debugger service works well for debugging grok filters.
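For illustration, a hand-written pattern for nginx's default "combined" log format might look like the sketch below. The field names (remote_addr, request, status and so on) are my own choice here, not anything the container prescribes:

filter {
  if [type] == "nginx-access" {
    grok {
      match => { "message" => "%{IPORHOST:remote_addr} - %{USER:remote_user} \[%{HTTPDATE:time_local}\] \"%{WORD:method} %{URIPATHPARAM:request} HTTP/%{NUMBER:httpversion}\" %{NUMBER:status} %{NUMBER:body_bytes_sent} %{QS:referrer} %{QS:agent}" }
    }
  }
}

After such a filter, status, request and the rest arrive in Elasticsearch as separate fields that Kibana can filter and aggregate on individually.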

4. Combining the two containers.

We describe both containers and their relationship in the docker-compose.yml file created in step 1, for example like this:

version: '2'
services:
  filebeat:
    build: .
    image: [ imageId filebeat ]
    volumes:
      - /path/to/myElk/log/nginx:/etc/logs/nginx  # nginx logs
      - /path/to/myElk/filebeat:/etc/filebeat
      - /path/to/myElk/filebeat/tmp:/tmp/filebeat
      - /path/to/myElk/filebeat/certs:/etc/pki/tls/certs
    links:
      - "elk"
    depends_on:
      - "elk"
    #entrypoint: ./time-to-start.sh
  elk:
    image: [ imageId elk ]
    ports:
      - "5601:5601"  # kibana
      - "9200:9200"  # elastic
      - "5044:5044"  # logstash beats filebeat


This configuration file describes the two containers and the relationship between them.
Try running the containers: if everything is OK, then at localhost:5601 you will see the first Kibana page, where you will need to select the first index. It will be named like filebeat-[date], of type filebeat, and if the data has started to arrive, it will be created automatically.
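If the index does not appear, a simple way to push a test record through the whole pipeline is to append a line in nginx's combined format to the mounted log file (the timestamp and paths here are made up for the example):

docker-compose up -d
echo '127.0.0.1 - - [01/May/2016:12:00:00 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.0"' >> /path/to/myElk/log/nginx/access.log

Filebeat should pick the line up; it should then show up both in the mounted filebeat/tmp folder and, via Logstash, in Elasticsearch.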

For those who run Docker on a Mac: in addition, for localhost:5601 to be available on the host machine, you will need to forward the ports through VirtualBox port forwarding.
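A sketch of such a rule via VBoxManage, assuming the docker-machine VM has the default name "default":

VBoxManage controlvm "default" natpf1 "kibana,tcp,127.0.0.1,5601,,5601"

The same can be done for 9200 and 5044 if those ports are needed on the host as well.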

Source: https://habr.com/ru/post/282866/

