One of the resources I look after suddenly became popular, with both good and bad users. The hardware, fairly powerful in general, stopped coping with the load. The software on the server is the usual stack: Linux, Nginx, PHP-FPM (+ APC), MySQL, all recent versions. The sites running on it are Drupal and phpBB. Optimization at the software level (memcached, missing indexes in the database) helped a little, but did not solve the problem fundamentally. And the problem is a large number of requests: to static content, to dynamic content, and especially to the database. I set the following limits in Nginx:
on connections
limit_conn_zone $binary_remote_addr zone=perip:10m;
limit_conn perip 100;
and a rate limit on requests for dynamic content (locations with fastcgi_pass to php-fpm)
limit_req_zone $binary_remote_addr zone=dynamic:10m rate=2r/s;
limit_req zone=dynamic burst=10 nodelay;
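For context, here is a minimal sketch of where these directives live: the *_zone definitions go in the http context, the limits themselves in the server and the PHP location (the server name and the php-fpm socket path are placeholders, not values from this server):

# sketch of nginx.conf placement for the two limits
http {
    limit_conn_zone $binary_remote_addr zone=perip:10m;
    limit_req_zone  $binary_remote_addr zone=dynamic:10m rate=2r/s;

    server {
        server_name example.com;
        limit_conn perip 100;                          # per-IP connection cap for the whole site

        location ~ \.php$ {
            limit_req zone=dynamic burst=10 nodelay;   # rate limit only dynamic requests
            fastcgi_pass unix:/var/run/php-fpm.sock;   # path is an assumption
            include fastcgi_params;
        }
    }
}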
Things got noticeably easier: the logs show that nobody ever hits the first zone, while the second one fires constantly.
But the bad guys kept hammering away, and I wanted to drop them earlier, at the firewall level, and for a long time.
At first I parsed the logs by hand and added the most persistent offenders to an iptables ban. Then I parsed them from cron every 5 minutes. I tried fail2ban. When I realized just how many bad guys there were, I moved them into an ipset hash:ip set.
That made things almost fine, but some unpleasant moments remained:
- parsing and sorting the logs itself takes a decent amount of (processor) time
- the server bogs down if a new wave starts between two adjacent log runs
I had to figure out how to add violators to the blacklist quickly. The first idea was to write an Nginx module plus a daemon that would update the ipsets. It could be done without a daemon, but then Nginx would have to run as root, which is not pretty. It would have been feasible to write, but I realized there was not that much time. I did not find anything similar ready-made (maybe I looked badly?), so I came up with the following scheme.
When the limit is exceeded, Nginx returns a 503 Service Temporarily Unavailable error. So I decided to hook onto it!
For each rate-limited location we define our own error page
error_page 503 =429 @blacklist;
And the corresponding named location
location @blacklist {
    fastcgi_pass localhost:1234;
    fastcgi_param SCRIPT_FILENAME /data/web/cgi/blacklist.sh;
    include fastcgi_params;
}
It gets more interesting from here.
We need CGI script support. Install, configure and start spawn-fcgi and fcgiwrap. I already had them set up for collectd.
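For reference, a minimal sketch of launching fcgiwrap via spawn-fcgi on the TCP port used in the config above (the user, group and port are only what the rest of the article implies, adjust to your setup):

# run fcgiwrap behind spawn-fcgi on 127.0.0.1:1234 (matches fastcgi_pass localhost:1234)
spawn-fcgi -a 127.0.0.1 -p 1234 -u nginx -g nginx -- /usr/sbin/fcgiwrap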
CGI script itself
Actually, everything in it is obvious, except perhaps the SQLite part. For now I added it just for statistics, but in principle it could also be used to expire stale entries from the blacklist. The 5-minute interval is not used there yet either.
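The original script is not reproduced here, so below is only a minimal sketch of what a blacklist.sh along these lines could look like. It assumes the nginx user is allowed to call ipset (for example via a sudoers entry, not shown) and uses a made-up SQLite database path and table name:

#!/bin/bash
# Sketch of /data/web/cgi/blacklist.sh (illustrative only).
# REMOTE_ADDR is passed in by fcgiwrap via fastcgi_params.

IP="$REMOTE_ADDR"

# add the offender to the blacklist; -exist makes repeated adds harmless
# (assumes the nginx user may run ipset through sudo)
sudo ipset add web_black_list "$IP" -exist

# optional: record the hit in SQLite for statistics (path and schema are assumptions)
sqlite3 /data/web/cgi/blacklist.db \
  "CREATE TABLE IF NOT EXISTS hits(ip TEXT, ts DATETIME DEFAULT CURRENT_TIMESTAMP);
   INSERT INTO hits(ip) VALUES ('$IP');"

# minimal CGI response; nginx still returns the 429 set by error_page
echo "Content-Type: text/plain"
echo ""
echo "Too many requests"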
The blacklist was created like this
ipset create web_black_list hash:ip
The iptables rule can be whatever you like, depending on your configuration and imagination.
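For example, the simplest variant (the chain and the rule's position are an assumption, adjust to your firewall) could be:

# drop all traffic from addresses in the blacklist set
iptables -I INPUT -m set --match-set web_black_list src -j DROP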
I have seen one hoster offer a managed firewall service. By replacing the ipset add call in the script with a small curl request, you can filter the bad guys on the external firewall, offloading your own channel and network interface.
P.S.: I had a laugh at a forum post by one "hacker" about how quickly he took the server down. He had no idea it was the server that had shut him out.
Additions:
Thanks to megazubr for the tip about the timeout parameter when creating the blacklist: there is no need to clean it from cron. The command to create it with a 5-minute timeout now looks like this:
ipset create web_black_list hash:ip timeout 300
Thanks also to alexkbs for raising the security point. On production servers the FastCGI handler should listen on a unix socket with permissions for nginx only. In the config we write:
error_page 503 =429 @blacklist;

location @blacklist {
    fastcgi_pass unix:/var/run/blacklist-wrap.sock-1;
    fastcgi_param SCRIPT_FILENAME /data/web/cgi/blacklist.sh;
    include fastcgi_params;
}
For spawn-fcgi.wrap:
FCGI_SOCKET=/var/run/blacklist-wrap.sock
FCGI_PROGRAM=/usr/sbin/fcgiwrap
FCGI_EXTRA_OPTIONS="-M 0700 -U nginx -G nginx"