
Secure Deployment of ElasticSearch Server

After the successful migration from MongoDB full-text search to ElasticSearch, we launched several new services running on Elastic, including a browser extension, and overall I was extremely pleased with the move.

But there was one fly in the ointment. About a month after configuration, with everything running smoothly, LogEntries and NewRelic both cried out that the search server was not responding. After logging in to DigitalOcean's dashboard, I saw a message from support saying the server had been suspended due to heavy outgoing UDP traffic, which most likely meant the server had been compromised.

DigitalOcean provided a link to instructions on what to do in this case. But the most interesting part was in the comments: almost everyone who had suffered an attack recently had a deployed ElasticSearch cluster with port 9200 open. The attackers took advantage of Java and ES vulnerabilities, gained access to the servers and turned them into part of some botnet.

I had to rebuild the server from scratch, but this time I would not be so naive: the server would be reliably protected. Below I describe my setup using Node.js, Dokku/Docker and SSL.

Why did this happen?


Despite its power, ElasticSearch does not provide any built-in means of protection or authorization; you have to do everything yourself. There is a good article on this topic.

The attackers (most likely) exploit the vulnerability of Elastic's dynamic scripts, so if they are not used (as in my case), it is recommended to disable them.

And finally, an open port 9200 is like bait; it needs to be closed.

What's the plan?


My plan was to spin up a clean DigitalOcean droplet, deploy ElasticSearch inside a Docker container (so that even if the instance is compromised, all I need to do is restart the container), close ports 9200/9300 to outside access, and route all traffic to Elastic through a Node.js proxy server with a simple authorization model based on a shared secret.

Spinning up a new droplet


DigitalOcean provides a pre-built image with Dokku/Docker on board, running on Ubuntu 14, so it makes sense to pick it right away. As usual, spinning up a new machine takes a couple of dozen seconds, and we are ready to go.


Deploy ElasticSearch in a container


The first thing we need is a Docker image with ElasticSearch. Although there are several plugins for Dokku, I decided to install it myself, as that seemed simpler configuration-wise.

A ready-made image for Elastic already exists, along with good instructions for using it.

$ docker pull dockerfile/elasticsearch

Once the image is downloaded, we need to prepare a volume that will be external to the running container (even if the container stops or is restarted, the data will remain on the host file system).

 $ cd /
 $ mkdir elastic

In this folder we create the configuration file, elasticsearch.yml. In my case it is very simple: my cluster consists of a single machine, so the default settings suit me. But, as mentioned above, dynamic scripts must be disabled.

 $ nano elasticsearch.yml 

It consists of only one line:

 script.disable_dynamic: true 

After that, you can start the server. I put this into a simple script, since during configuration and debugging you may need to restart it several times:

 docker run --name elastic -d \
     -p 127.0.0.1:9200:9200 -p 127.0.0.1:9300:9300 \
     -v /elastic:/data \
     dockerfile/elasticsearch \
     /elasticsearch/bin/elasticsearch -Des.config=/data/elasticsearch.yml

Pay attention to -p 127.0.0.1:9200:9200: here we bind port 9200 to localhost only. I had spent a few hours trying to configure iptables to close ports 9200/9300, to no avail. Thanks to the help of @darkproger and @kkdoo, everything worked as it should.

-v /elastic:/data maps the container's /data volume to the local /elastic folder.

The Node.js proxy server


Now we need to start the Node.js proxy server, which will safely serve traffic between the outside world and localhost:9200. I made a small project based on http-proxy, called elastic-proxy; it is very simple and can easily be reused in other projects.

 $ git clone https://github.com/likeastore/elastic-proxy
 $ cd elastic-proxy

The server code itself:

 var http = require('http');
 var httpProxy = require('http-proxy');
 var url = require('url');
 var config = require('./config');
 var logger = require('./source/utils/logger');

 var port = process.env.PORT || 3010;
 var proxy = httpProxy.createProxyServer();

 var parseAccessToken = function (req) {
     var request = url.parse(req.url, true).query;
     var referer = url.parse(req.headers.referer || '', true).query;

     return request.access_token || referer.access_token;
 };

 var server = http.createServer(function (req, res) {
     var accessToken = parseAccessToken(req);

     logger.info('request: ' + req.url + ' accessToken: ' + accessToken + ' referer: ' + req.headers.referer);

     if (!accessToken || accessToken !== config.accessToken) {
         res.statusCode = 401;
         return res.end('Missing access_token query parameter');
     }

     proxy.web(req, res, {target: config.target});
 });

 server.listen(port, function () {
     logger.info('Likeastore Elastic-Proxy started at: ' + port);
 });

It proxies all requests and lets through only those that carry an access_token as a query parameter. The access_token is configured on the server via the PROXY_ACCESS_TOKEN environment variable.

Since the server is already set up for Dokku, all that remains is to push the sources, and Dokku will deploy the new service.

 $ git push production master

After the deployment, log in to the server and configure the access token:

 $ dokku config:set proxy PROXY_ACCESS_TOKEN="your_secret_value"

I also wanted all traffic to go over SSL; with Dokku this is very easy to achieve: copy server.crt and server.key to /home/dokku/proxy/tls.

Restart the proxy to apply the latest changes and make sure everything is OK by opening https://search.likeastore.com. If everything is in order, it will respond with:

 Missing access_token query parameter 

Linking the proxy and ElasticSearch containers


Now we need to link the two containers: the first with the Node.js proxy, the second with ElasticSearch itself. I really liked the dokku-link plugin, which does exactly what we need. Install it:

 $ cd /var/lib/dokku/plugins
 $ git clone https://github.com/rlaneve/dokku-link

And after installation, link the proxy to the elastic container:

 $ dokku link proxy elastic 

After that, the proxy needs to be restarted once more. If everything is fine, then following the link proxy.yourserver.com?access_token=your_secret_value we will see the response from ElasticSearch:

 {
     status: 200,
     name: "Tundra",
     version: {
         number: "1.2.1",
         build_hash: "6c95b759f9e7ef0f8e17f77d850da43ce8a4b364",
         build_timestamp: "2014-06-03T15:02:52Z",
         build_snapshot: false,
         lucene_version: "4.8"
     },
     tagline: "You Know, for Search"
 }

Configuring the client


It remains to configure the client so that it adds the access_token to every request to the server. For a Node.js application it looks like this:

 var client = elasticsearch.Client({
     host: {
         protocol: 'https',
         host: 'search.likeastore.com',
         port: 443,
         query: {
             access_token: process.env.ELASTIC_ACCESS_TOKEN
         }
     },
     requestTimeout: 5000
 });

Now you can restart the application, make sure everything works as it should... and exhale.

Afterword


This setup worked (and still works) perfectly well for Likeastore. However, over time I came to see some overhead in this approach. Most likely, you could get rid of the proxy server and instead configure nginx with basic authorization, upstreaming to the Docker container, also with SSL support.
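For reference, a hypothetical nginx equivalent of such a setup might look roughly like this (the server name, paths and htpasswd file are assumptions, not a tested configuration):

```nginx
# Sketch: SSL termination plus basic auth in front of the
# container-local ElasticSearch port.
server {
    listen 443 ssl;
    server_name search.example.com;

    ssl_certificate     /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;

    location / {
        auth_basic           "ElasticSearch";
        auth_basic_user_file /etc/nginx/.htpasswd;
        proxy_pass           http://127.0.0.1:9200;
    }
}
```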

Another good idea would probably be to keep Elastic in a private network and make all requests to it through the application API. This may be less convenient from a development point of view, but it is more reliable from a security point of view.

P.S. This is a retelling of a post from my personal blog.

Source: https://habr.com/ru/post/231917/

