
A couple of commands that come in handy during a DDoS attack (and not only then).

In my case, the frontend server is nginx, and its access-log format is:

log_format main '$remote_addr - $remote_user [$time_local] "$host" "$request" '
                '$status $body_bytes_sent "$http_referer" '
                '"$http_user_agent" "$http_x_forwarded_for" -> $upstream_response_time';

which produces log lines like this:

188.142.8.61 - - [14/Sep/2014:22:51:03 +0400] "www.mysite.ru" "GET / HTTP/1.1" 200 519 "6wwro6rq35muk.ru" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.191602; .NET CLR 3.5.191602; .NET CLR 3.0.191602)" "-" -> 0.003

1. tail -f /var/log/nginx/nginx.access.log | cut -d ' ' -f 1 | logtop

Gives you the big picture: the distribution of unique IPs the requests come from, the number of requests per IP, and so on.
The most valuable part is that it all works in real time, so you can watch the situation change as you adjust the configuration (for example, ban the top 20 most active IPs via iptables, or temporarily restrict the request geography in nginx via GeoIP, http://nginx.org/en/docs/http/ngx_http_geoip_module.html).
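The iptables step mentioned above can be sketched as follows. This is an illustration, not the article's code: it generates (but does not execute) one DROP rule per top-20 IP, using sample log lines in place of the real access log; review the output before piping it to sh as root.

```shell
#!/bin/sh
# Sample data standing in for /var/log/nginx/nginx.access.log;
# the first field of each line is the client IP, as in the article.
printf '%s\n' \
  '95.65.66.183 - - "GET / HTTP/1.1" 200' \
  '95.65.66.183 - - "GET / HTTP/1.1" 200' \
  '122.29.177.10 - - "GET / HTTP/1.1" 200' > /tmp/sample.access.log

# Count requests per IP, keep the 20 busiest, and print a ban rule
# for each one instead of executing it.
cut -d ' ' -f 1 /tmp/sample.access.log | sort | uniq -c | sort -rn | head -n 20 |
  awk '{print "iptables -A INPUT -s " $2 " -j DROP"}'
# prints:
# iptables -A INPUT -s 95.65.66.183 -j DROP
# iptables -A INPUT -s 122.29.177.10 -j DROP
```

Printing the rules first keeps a human in the loop, which matters during a DDoS: blindly dropping your busiest IPs can ban a CDN, a corporate NAT, or (as in the second example below) Google's crawlers.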

It shows (updating in real time) something like:

3199 elements in 27 seconds (118.48 elements/s)
 1  337  12.48/s  95.65.66.183
 2  308  11.41/s  122.29.177.10
 3  304  11.26/s  122.18.251.54
 4  284  10.52/s  92.98.80.164
 5  275  10.19/s  188.239.14.134
 6  275  10.19/s  201.87.32.17
 7  270  10.00/s  112.185.132.118
 8  230   8.52/s  200.77.195.44
 9  182   6.74/s  177.35.100.49
10  172   6.37/s  177.34.181.245

Here the columns mean: line number (rank), number of requests from that IP, requests per second, and the IP address itself.

The first line shows summary statistics for all requests.

In this case, we see that requests are coming from IP 95.65.66.183 at 12.48 requests per second, and that 337 requests arrived from it in the last 27 seconds. The remaining lines read the same way.

Let's break it down piece by piece:
tail -f /var/log/nginx/nginx.access.log - continuously read new lines as they are appended to the end of the log file

cut -d ' ' -f 1 - split each line into fields on the delimiter given by the -d flag (here, a space).
The -f 1 flag prints only field number 1 (here, the IP the request came from)

logtop - counts identical lines (here, IPs), sorts them in descending order, and displays them along with running statistics (on Debian it can be installed with aptitude from the standard repository).
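If logtop is not available, a one-shot (non-real-time) approximation of the same count can be built from standard tools; a sketch, with sample lines standing in for the real log:

```shell
#!/bin/sh
# One-shot stand-in for logtop: count requests per IP and show the
# busiest ones. `uniq -c` prepends the occurrence count, `sort -rn`
# orders by that count descending, `head` keeps the top 10.
printf '%s\n' \
  '95.65.66.183 - - [14/Sep/2014:22:51:03 +0400] "GET / HTTP/1.1"' \
  '95.65.66.183 - - [14/Sep/2014:22:51:04 +0400] "GET / HTTP/1.1"' \
  '92.98.80.164 - - [14/Sep/2014:22:51:05 +0400] "GET / HTTP/1.1"' \
  | cut -d ' ' -f 1 | sort | uniq -c | sort -rn | head -n 10
```

The output is one line per IP, count first (95.65.66.183 with 2, then 92.98.80.164 with 1); unlike logtop it has no rate column and does not refresh, but it needs nothing beyond coreutils.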

2. grep "&key=" /var/log/nginx/nginx.access.log | cut -d ' ' -f 1 | sort | uniq -c | sort -n | tail -n 30 - shows the per-IP distribution of any substring in the log.

In my case, I needed statistics on how often each IP used the &key= parameter in its requests.

It will show something like this:

31 66.249.69.246
47 66.249.69.15
51 66.249.69.46
53 66.249.69.30
803 66.249.64.33
822 66.249.64.25
912 66.249.64.29
1856 66.249.64.90
1867 66.249.64.82
1878 66.249.64.86



In this case, we see that 1878 requests came from IP 66.249.64.86 (and a whois lookup shows this IP belongs to Google, so it is not "malicious").

Let's break it down piece by piece:

grep "&key=" /var/log/nginx/nginx.access.log - find all log lines containing the substring "&key=" (anywhere in the line)
cut -d ' ' -f 1 - (see the previous example) print the IP
sort - sort the lines (required for the next command to work correctly)
uniq -c - print unique lines and count the occurrences of each (the -c flag)
sort -n - sort in numeric mode (the -n flag)
tail -n 30 - print the 30 lines with the most occurrences (the -n 30 flag; any number of lines can be given)
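The sort | uniq -c step sorts every matching line, which can be slow on a huge log. A variant not from the article: awk can tally the counts in a single pass, so only the (much smaller) per-IP table needs sorting. A sketch on sample data:

```shell
#!/bin/sh
# Single-pass per-IP count of lines containing "&key=". awk filters
# and tallies as it reads; only the final per-IP table is sorted.
printf '%s\n' \
  '66.249.64.86 - - "GET /?foo&key=1 HTTP/1.1"' \
  '66.249.64.86 - - "GET /?foo&key=2 HTTP/1.1"' \
  '66.249.69.246 - - "GET / HTTP/1.1"' \
  | awk '/&key=/ {c[$1]++} END {for (ip in c) print c[ip], ip}' \
  | sort -n | tail -n 30
# prints:
# 2 66.249.64.86
```

The output format matches the original pipeline (count, then IP), so the rest of the workflow is unchanged.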

The commands above are for Debian or Ubuntu, but they should look much the same on other Linux distributions.

Source: https://habr.com/ru/post/236771/

