
Automating log downloads from Kibana to Redmine

A typical Kibana workflow: we look at logs, spot errors, and create tickets for them. We have a lot of logs and little space to store them, so simply pasting a link to an Elasticsearch/Kibana document into a ticket is not enough, especially for low-priority tasks: by the time anyone gets around to it, the index containing the log may already be deleted. So the document has to be saved to a file and attached to the ticket.

Doing this once is fine, but creating ten tickets in a row by hand gets tedious fast, so I decided to "quickly" (ha-ha) automate it.


Under the cut: a Friday article, an experimental JavaScript feature, a couple of dirty hacks, a small regular expression, a reverse proxy, security traded for convenience, crutches, and the obligatory xkcd image.

Fair warning: I am far from an expert in web technologies, so to experts my problems will probably seem obvious and my solutions stupid. But this is not a production solution, just a small script "for our own use". Everything runs inside a trusted local network, which is why the script has plenty of security problems.

Solution options


Plenty of solutions come to mind right away. First, you could push all the logs into RM automatically (surprisingly, there is even a logstash plugin for this), filtering/aggregating them beforehand, and then just edit the description and set the assignee. That sounds cool, but it would take a long time to debug and tune, and a lot of new routine work would appear: writing descriptions and deleting the unneeded tickets.

The second option is to write a script that receives links to logs, downloads them, asks the user for additional parameters, and creates a new ticket through the Redmine API. But that means building a whole separate interface and duplicating part of RM's functionality...

You could get really perverse and write a clicker, or use Selenium to prepare the ticket: the familiar interface stays, and you never touch the mouse... Except that you may suddenly need to edit something by hand.

A browser plugin? Then you would have to write, register, and maintain it, and for two browsers at that.

A Redmine plugin? Again, an API to study, plus digging into the guts of RM... all for a single extra field.

In the end we arrive at a bookmarklet (JavaScript executed from a bookmark) and/or a userscript (greasemonkey/tampermonkey, etc.): with JavaScript you can draw an interface, download logs via ajax requests, and generally do almost anything you like with the page.
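For reference, a bookmarklet is just a bookmark whose URL uses the javascript: scheme; clicking it runs the code on the current page. A minimal sketch of the shape (createUi here stands for an assumed entry point, such as the UI-building function shown later):

```javascript
// A bookmarklet is stored as a bookmark URL: the "javascript:" scheme
// followed by the whole script as an IIFE, usually minified to one line.
// createUi is an assumed entry point, not part of any browser API.
const bookmarklet = "javascript:(function(){createUi();})();";
```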

File upload


For now, the most obscure part is file uploading; everything else looks easy enough. On the RM ticket-creation page, uploads are handled by an ordinary <input type="file">; when it changes, the function addInputFiles(this) is called.

In theory, you just need to replace this element's file list and call that method. There is only one small problem: the files property of an input accepts only a genuine FileList, and FileList has no public constructor, so you cannot simply assign it an array of files you built yourself.



This restriction exists so that a page cannot quietly send /etc/passwd, /etc/shadow, or photos of your cat from the desktop to the server. Reasonable in principle, but we need to get around it somehow. And when you can't, but really want to, there is a dirty hack based on an experimental feature: the Clipboard API.

 function createFileList(files) {
     // A ClipboardEvent's clipboardData (or a bare DataTransfer) gives us
     // a writable item list that exposes a real FileList.
     const dt = new ClipboardEvent("").clipboardData || new DataTransfer();
     for (let file of files) {
         dt.items.add(file);
     }
     return dt.files;
 }

That is, we simulate adding files from the clipboard and then take the resulting FileList. Creating a "file" from text is itself trivial:

 function createFile(text, fileName) {
     let blob = new Blob([text], {type: 'text/plain'});
     let file = new File([blob], fileName);
     return file;
 }

User interface


Everything is as simple as an axe: we add a label in the right place, an input field, and a download button. Since this is "for our own use", I didn't bother much with the input format (or its validation): let it be a text area, one line per log, with the link and the name of the file to create separated by whitespace.

For the bookmarklet it also proved useful to be able to remove the UI by its id.

Elementary things

 function removeSelf() {
     let old = document.getElementById(ui_id);
     if (old != null) old.remove();
 }

 function createUi() {
     removeSelf();
     let ui = document.createElement('p');
     ui.id = ui_id;

     let label = document.createElement('label');
     label.innerHTML = "Logs data:";
     ui.appendChild(label);

     let textarea = document.createElement('textarea');
     textarea.id = data_id;
     textarea.cols = 60;
     textarea.rows = 10;
     textarea.name = "issue[logs_data]";
     ui.appendChild(textarea);

     let button = document.createElement('button');
     button.type = "button";
     button.onclick = addLogsData;
     button.innerHTML = "Add logs data";
     ui.appendChild(button);

     // Insert the block right before Redmine's #attributes section.
     let attributesBlock = document.querySelector("#attributes");
     attributesBlock.parentNode.insertBefore(ui, attributesBlock);
 }


Main job


Everything is simple here too: we split the text from the input field into "link"–"file name" pairs, download everything straight from Elasticsearch (because Kibana will not give up the data so easily), upload it to RM, update the ticket description, and we're done. Fortunately, jQuery is already loaded on RM pages, so ajax requests are easy to make.

Boring code, the regex lives here

 function addLogsData() {
     let text = document.getElementById(data_id).value;
     let lines = text.split('\n');
     let urlsAndNames = lines
         .filter(x => x.length > 2)
         .map(line => line.split(/\s+/, 2));
     downloadUrlsToFiles(urlsAndNames);
 }

 // Rewrites a Kibana document link into a direct Elasticsearch document URL.
 const kibana_pattern = /http:\/\/([^:]*):\d+\/app\/kibana#\/doc\/[^\/]*\/([^\/]*)\/([^\/]*)\/?\?id=(.*?)(&.*)?$/;
 const es_pattern = 'http://$1:9200/$2/$3/$4';

 function downloadUrlsToFiles(urlsAndNames) {
     let requests = urlsAndNames.map((splitted) => {
         let url = splitted[0].replace(kibana_pattern, es_pattern);
         return $.ajax({ url: url, dataType: 'json' });
     });
     $.when(...requests).done(function(...responses) {
         let files = responses.map((responseRaw, index) => {
             let response = responseRaw[0];
             checkError(response);
             let fileName = urlsAndNames[index][1];
             return createFile(JSON.stringify(response._source), fileName + '.json');
         });
         uploadFiles(files, urlsAndNames);
     }).fail((error) => {
         let errorString = JSON.stringify(error);
         alert(errorString);
         throw errorString;
     });
 }

 function uploadFiles(files, urlsAndNames) {
     pseudoUpload(files);
     changeDescription(urlsAndNames);
     removeSelf();
 }
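The snippet references a few helpers that live in the full gist (checkError, pseudoUpload, changeDescription). As a hypothetical sketch of what the description update might look like as a pure function (the exact format in the gist may differ):

```javascript
// Hypothetical helper: append a "file -> original Kibana link" listing
// to the issue description text, so the source URLs are not lost.
function appendLogsToDescription(description, urlsAndNames) {
    const listing = urlsAndNames
        .map(([url, name]) => '* ' + name + '.json: ' + url)
        .join('\n');
    return description + '\n\nLogs:\n' + listing;
}
```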


Great, everything is ready! Let's do a test run and...



Security


For those who don't know: requesting plain-http data while on an https page is Very Bad, because an MITM attacker could feed you bogus data. Moreover, Firefox, even if you allow it once, will ask for permission every single time, and there is no whitelist. From the user's point of view this is all fine and proper, but for a quick-and-dirty script it's nothing but a stick in the wheels.

Well, I don't feel like buying X-Pack for Elasticsearch for the sake of one lousy script, so I'll have to set up an https -> http proxy, a.k.a. a reverse proxy. There are quite a few options here, from the monstrous squid to a python script. haproxy seemed the best fit: it is easy to install and configure, and it barely eats any resources.

It is enough to generate a self-signed certificate (sorry, Let's Encrypt, but we are inside a trusted zone)

 openssl genrsa -out dummy.key 1024
 openssl req -new -key dummy.key -out dummy.csr
 openssl x509 -req -days 3650 -in dummy.csr -signkey dummy.key -out dummy.crt
 cat dummy.crt dummy.key > dummy.pem

and, in fact, set up haproxy:

 frontend https-in
     # http-response requires HTTP mode; with "mode tcp" haproxy would
     # ignore the header directive.
     mode http
     bind *:9243 ssl crt /etc/ssl/localcerts/dummy.pem alpn http/1.1
     http-response set-header Strict-Transport-Security "max-age=16000000; includeSubDomains; preload;"
     default_backend nodes-http

 backend nodes-http
     mode http
     server node1 localhost:9200 check

Now on port 9243 there is a transparent proxy to Elasticsearch (accordingly, we change the port in the regex and switch the scheme to https).
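With the proxy in place, the URL-rewriting patterns from the script above change accordingly. A sketch, assuming the proxy listens on the Elasticsearch host on port 9243:

```javascript
// Updated patterns: the scheme becomes https and the target port 9243
// (the haproxy frontend) instead of Elasticsearch's plain-http 9200.
const kibana_pattern = /https:\/\/([^:]*):\d+\/app\/kibana#\/doc\/[^\/]*\/([^\/]*)\/([^\/]*)\/?\?id=(.*?)(&.*)?$/;
const es_pattern = 'https://$1:9243/$2/$3/$4';

function toEsUrl(kibanaUrl) {
    return kibanaUrl.replace(kibana_pattern, es_pattern);
}
```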

However, this still will not satisfy our browser, which guards the user's security. This time the problem is that you cannot request data from another origin unless that origin explicitly allows it. This is solved by the CORS mechanism. Luckily, Elasticsearch can handle that by itself:

 http.cors.allow-headers: X-Requested-With, Content-Type, Content-Length
 http.cors.allow-origin: "/.*/"
 http.cors.enabled: true

Userscript


Let me remind you that so far we have packaged this whole affair as a bookmarklet. In principle that's fine, but some people (me, for example) are too lazy to make even that one extra click. So let's make a userscript. This immediately raises the problem of updating it (we'll be improving this thing for ages!). Therefore, we use the userscript auto-update mechanism, which I, of course, cheat with yet another crutch:

 // ==UserScript==
 // @name     KIBANA_LOGS
 // @grant    none
 // @include  https://<rm-address>/*issues*
 // ==/UserScript==
 (function(){
     document.body.appendChild(document.createElement('script'))
         .src = 'https://<kibana-address>:4443/kibana_logs_rm.js';
 })();

The paranoid can stick with bookmarklets and update them by hand. To distribute this garbage we need an https server. Here I frankly didn't bother and took the first one I found (in python 2.7, no less) *sprinkles ashes on my head*:

 import BaseHTTPServer, SimpleHTTPServer
 import ssl

 httpd = BaseHTTPServer.HTTPServer(('0.0.0.0', 4443),
                                   SimpleHTTPServer.SimpleHTTPRequestHandler)
 httpd.socket = ssl.wrap_socket(httpd.socket,
                                certfile='/etc/ssl/localcerts/dummy.pem',
                                server_side=True)
 httpd.serve_forever()

Now users only need to install the userscript / bookmarklet and add the certificate to the exceptions, and everything works.

A couple of bugs


The essence of the first problem: when several ajax requests are processed at once, $.when passes the callback as many arguments as there were requests. But when there is only one request, jQuery "kindly" unpacks it into three separate arguments (data, status, jqXHR). So I had to write this crutch:

 let responses;
 if (requests.length == 1) {
     responses = [arguments];
 } else {
     responses = Array.from(arguments);
 }
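The same idea extracted as a standalone, testable function (a sketch; the [data, status, jqXHR] triple is jQuery's documented resolve value for a single $.ajax call):

```javascript
// Normalize $.when(...) callback arguments so the handler always sees
// an array of [data, status, jqXHR] triples, whether there was one
// request or many.
function normalizeWhenArgs(requestCount, args) {
    // One request: args itself is the spread-out triple, so wrap it.
    // Several requests: each element of args is already a triple.
    return requestCount === 1 ? [Array.from(args)] : Array.from(args);
}
```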

The second bug is related to the fact that when you change the tracker or the status of an issue, Redmine saves the entered data, requests a fresh form (html with embedded js), recreates the interface, and refills the fields via the replaceIssueFormWith function. It sounds a bit crazy, but this is how workflows are implemented (different fields can potentially appear at different stages). Here, too, an ad-hoc crutch was needed:

 function installReplaceHook() {
     let original = window.replaceIssueFormWith;
     window.replaceIssueFormWith = function(html) {
         // Save our field's value before Redmine replaces the form...
         let logs_data = document.getElementById(data_id).value;
         let ret = original(html);
         // ...then recreate our UI and restore the saved value.
         createUi();
         document.getElementById(data_id).value = logs_data;
         return ret;
     };
 }

That is, we simply hook the original function and repeat for our own field what Redmine does for its fields.
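The same monkey-patching pattern in a generic form (a sketch, not taken from the original script):

```javascript
// Wrap obj[name]: run `before` first (its return value is saved state),
// call the original with its arguments intact, then run `after(state)`,
// and return the original's result unchanged.
function installHook(obj, name, before, after) {
    const original = obj[name];
    obj[name] = function (...args) {
        const state = before();
        const ret = original.apply(this, args);
        after(state);
        return ret;
    };
}
```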

Conclusion


The full version of the script is in my gist. And here is the picture most of you probably expected by the end of this article:



All in all, automating things is fun and useful, and lets you learn something new in an unfamiliar area. The script's users are satisfied, and creating tickets from Kibana logs is not nearly so annoying now.

Source: https://habr.com/ru/post/354468/

