
Successful implementation of SIEM. Part 2

I continue my article on the implementation and use of SIEM (part 1). As promised, in this second part I will cover the correlation and visualization of events: the main misconceptions management holds about what correlation is, what really matters and what does not, and examples of effective and ineffective event correlation.

Within a SIEM, correlation is the comparison of information from different events for the purpose of a subsequent reaction. Response methods include creating a new event, sending a notification to an administrator by e-mail or to the console, executing a script, opening a case inside the SIEM, or writing information to a list (active list).

Visualization is the display of information from different events in the form of graphs, charts, and lists, either in real time or over a certain period.

In my work I have encountered a number of misconceptions among colleagues and management about what correlation is. The first is that correlation means comparing events from different event sources. This is simply wrong: we can have events of different types from a single source that also need to be compared, for example events from a firewall that is also a VPN gateway and an intrusion prevention system, or events from web servers with different HTTP methods. The second big misconception is that correlation happens on the fly, i.e. that the time between event and reaction is always small. The third misconception is that correlation is needed only to detect incidents. The fourth is that everything can and should be answered with an e-mail notification.
The main misconception about visualization is that it is needed only to show pretty pictures to management.

So what is correlation really needed for, and what are its basic principles? The main one is the enrichment of interesting events with additional information. For example, suppose we have only the bare source IP addresses that the DHCP server handed out to our clients, and on the firewall we see connections from one of those addresses to botnet servers, but no information about the username. Going to the DHCP server every time is slow and tedious; we want to see the username right away.

To do this, we take logs from the workstations to understand which user or machine name the IP address caught trying to connect to the botnet is bound to, and in the correlated event we already see full information about who did it. This is an example of effective correlation.
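The enrichment described above can be sketched as a simple join between firewall alerts and DHCP lease records. This is a minimal illustration, not a real SIEM rule; all field names, IP addresses, and usernames are invented for the example.

```python
from datetime import datetime

# Hypothetical DHCP lease log: (assigned_at, ip, username).
dhcp_leases = [
    (datetime(2015, 12, 1, 8, 0), "10.0.0.15", "j.smith"),
    (datetime(2015, 12, 1, 9, 30), "10.0.0.15", "a.jones"),
]

# Hypothetical firewall alerts: (seen_at, src_ip, dst_ip, rule).
firewall_alerts = [
    (datetime(2015, 12, 1, 10, 5), "10.0.0.15", "203.0.113.7", "botnet-C2"),
]

def enrich(alerts, leases):
    """Attach the username that held the source IP at alert time."""
    enriched = []
    for seen_at, src_ip, dst_ip, rule in alerts:
        owner = None
        # Walk leases in time order; the last one issued for this IP
        # before the alert is the one in effect.
        for assigned_at, ip, user in sorted(leases):
            if ip == src_ip and assigned_at <= seen_at:
                owner = user
        enriched.append({"time": seen_at, "src": src_ip,
                         "dst": dst_ip, "rule": rule, "user": owner})
    return enriched

events = enrich(firewall_alerts, dhcp_leases)
# events[0]["user"] is "a.jones" — the lease holder at 10:05, not the earlier one.
```

A production rule would of course read both streams from the SIEM's event store rather than from in-memory lists, but the join logic is the same.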

An example of ineffective correlation is, first of all, correlating events that fire often and carry no useful information: for example, IPS attack block/detect events combined with a matching permit rule firing on the firewall. Such a rule is ineffective because it generates a huge amount of spam, while IDS/IPS signatures are, as a rule, not known for their accuracy and produce a large number of false positives. The main symptom of ineffective correlation is a spam stream of non-informative events (notifications).

Another big headache in working with SIEM is determining what is important and what is not. This choice is often purely individual, but let me highlight the main points. As we remember, the main threats to information security are violations of integrity, availability, and confidentiality. While a breach of confidentiality mostly carries reputational risks and hurts income in the long run, breaches of integrity and availability hurt here and now.

Based on this logic, it is important to respond quickly to the following events: DDoS attacks, which we can monitor by analyzing events from firewalls, routers, switches, and netflow, and by collecting equipment status events from IT monitoring systems (Zabbix, Nagios, etc.); virus infections of hosts; brute-force attacks from the Internet against equipment on the network perimeter; server failures (services stopping and starting, changes to user rights, potentially dangerous administrator commands); ports opened for no clear reason (events from scanners); use of insecure protocols (monitored via firewall events on ports for tftp, telnet, etc.); and log sources that have stopped sending or been blocked.

It is also very effective to actively use third-party scripts that notify users about certain events, for example that their account has been blocked for violating the information security policy on VPN usage; that is, routine tasks that are often done manually and where the cost of a mistake is not very high.
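A notification script of this kind can be very small. The sketch below only builds the message; actually sending it via `smtplib` is left as a comment. All addresses, hostnames, and wording are placeholder assumptions.

```python
from email.message import EmailMessage

def lockout_notice(username, reason, sender="siem@example.org"):
    """Build a notification e-mail for a blocked account.
    The example.org domain and the message text are illustrative."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = f"{username}@example.org"
    msg["Subject"] = "Your account has been blocked"
    msg.set_content(
        f"Dear {username},\n\n"
        f"Your account was blocked automatically: {reason}.\n"
        "Please contact the security team to restore access.\n"
    )
    return msg

msg = lockout_notice("j.smith", "VPN policy violation")
# To actually send it:
#   import smtplib
#   smtplib.SMTP("mail.example.org").send_message(msg)
```

The SIEM would trigger such a script as a response action on the lockout event, passing the username from the correlated event.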

What is visualization effective for? Visualization is a very effective tool primarily for analyzing a large number of logs of the same type, where the statistics are worth watching. Let me give an example. A good visualization case is integrating IDS/IPS, firewall, and netflow events with Google Maps: visualizing where requests, signature hits, and traffic come from and go to (if the infrastructure is distributed), where we have the most of each, and you can always configure the picture to change as the request volume grows. For example, a small circle is 1 to 100 requests per hour, a medium one up to 1000, and so on.
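The circle sizing described above is just a bucketing function. A trivial sketch, using the thresholds from the text (100 and 1000 requests per hour are the article's example values, not a recommendation):

```python
def circle_size(requests_per_hour):
    """Map an hourly request count to a marker size for a map overlay.
    Thresholds follow the example in the text and should be tuned."""
    if requests_per_hour <= 100:
        return "small"
    if requests_per_hour <= 1000:
        return "medium"
    return "large"
```

A real map integration would feed these sizes, together with geolocated source coordinates, into whatever mapping API the SIEM's dashboard supports.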

It turned out to be impossible to write much about the principles of good correlation without referring to internal processes, so part three comes together with part two.

So meet part three.

Basic processes that can be simplified and solved with the help of SIEM.

1. Inventory and vulnerability management
I believe this is one of the key processes in building any organization's information security system; so to speak, this is step 1.
How it is implemented: sending scan results to the SIEM, reconstructing NAT translations from firewall or netflow logs, uploading information from various directories (AD, SharePoint, etc.), and maintaining a categorized asset list. Scanning can be done with hand-written scripts, a vulnerability scanner, and network scanners.
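Merging these sources into a single asset list can be sketched as follows. The field names, the directory export shape, and the port-based categorization rule are all assumptions for illustration only.

```python
# Hypothetical scanner output: one dict per discovered host.
scanner_hosts = [
    {"ip": "10.0.0.5", "open_ports": [22, 443]},
    {"ip": "10.0.0.9", "open_ports": [3389]},
]

# Hypothetical directory export (e.g. from AD), keyed by IP.
directory_hosts = {
    "10.0.0.5": {"name": "web01", "owner": "it-ops"},
}

def build_inventory(scanned, directory):
    """Join scan results with directory metadata and assign a rough
    category. The 'server if SSH/HTTPS is open' rule is illustrative."""
    inventory = []
    for host in scanned:
        meta = directory.get(host["ip"], {})
        is_server = 22 in host["open_ports"] or 443 in host["open_ports"]
        inventory.append({
            "ip": host["ip"],
            "name": meta.get("name", "unknown"),
            "owner": meta.get("owner", "unknown"),
            "category": "server" if is_server else "workstation",
            "open_ports": host["open_ports"],
        })
    return inventory

inventory = build_inventory(scanner_hosts, directory_hosts)
```

Once the list lives in the SIEM as an active list, reports and correlation rules can reference it directly, which is exactly the "all information in one place" benefit described below.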

The benefit: all the necessary information is in one place, and it is convenient to work with it, build reports, visualize, and compare.

You can also add a script here, or write a rule, that integrates with the incident management system so that administrators subsequently close the vulnerability or security problem.

2. Network perimeter control
Visualization of firewall and IDS/IPS rule hits, DNS activity, control of calls to C&C servers, and so on. These cases should be monitored by an IS analyst during the day, with a reaction and proactive analysis of each incident.

For such cases, it is a bad idea to set up trigger notifications; everyday analysis in real time is the most effective approach for them.

The output of analyzing such cases is increased effectiveness of all defenses, through recommendations developed while analyzing the logs. We can understand what a defense reacts to and whether we need it, compile a list of the firewall rules that fire most often and those that never fired at all, and optimize the firewall rule base as a whole. We can also find infected hosts that the antivirus does not catch.

3. Compliance
With the help of SIEM, you can control non-compliance with security standards for operating system settings, network equipment, VPN access, i.e. any config that can be parsed and sent for analysis. One could say that any modern scanner can do this, but here is the catch: scanners often have too much functionality and handle this particular task poorly, so it is more efficient to use a hand-written script that pulls out the necessary settings and sends them to the SIEM for analysis. In the SIEM you can then visualize and send notifications about which servers or server groups fail particular checks.
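The hand-written compliance script can be as simple as parsing "key value" lines from a config dump and diffing them against a baseline. The settings below resemble an SSH daemon config but are illustrative assumptions, not a real standard.

```python
# Required baseline: setting name -> required value (illustrative).
BASELINE = {
    "PermitRootLogin": "no",
    "PasswordAuthentication": "no",
}

def check_config(config_text, baseline=BASELINE):
    """Parse 'key value' lines, skip comments, and return every
    baseline setting whose actual value deviates or is missing."""
    settings = {}
    for line in config_text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            key, _, value = line.partition(" ")
            settings[key] = value.strip()
    return {k: settings.get(k, "<missing>")
            for k, v in baseline.items() if settings.get(k) != v}

sample = "PermitRootLogin yes\nPasswordAuthentication no\n"
violations = check_config(sample)
# violations contains only the non-compliant setting with its actual value.
```

The violations dict, tagged with the hostname, is what the script would forward to the SIEM for visualization and notification.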

4. Protection against attacks from the Internet.
The most topical attacks, and the ones requiring long enough analysis to confirm that "yes, this is it," are DDoS attacks, and they tend to seriously damage system operation. Analyzing logs from the firewall, web servers, DNS, and netflow lets you see a sharp change in the number of source addresses and in the type of traffic they send, which can signal the start of a DDoS attack and shorten the response time to it.

5. Control of log delivery.
This is one of the most important things to implement within a SIEM. It is effective to keep a list of sources that have sent logs in the last 2 hours and to notify when an entry in that list expires. It is also effective to check the firewall's own logs to see which log traffic did not get through.

Now let's talk a little about personnel.

As practice shows, work with a SIEM falls into two major areas: the first is operation and development, and the second is direct monitoring and console work, i.e. event processing.

Different people should handle these areas; they must not be combined.

What does operation and development include? First of all, maintaining the SIEM itself, communicating with technical support, and so on. The second important task is creating and developing log collection tools and correlation rules, testing them and putting them into operation, and writing documentation on how to work with these rules and scenarios.

The processing part includes onboarding new servers for monitoring, responding to events, analyzing logs through visualization tools, setting up new visualizations, and submitting requests for new correlation rules.

It is important to understand that combining these two roles will reduce the efficiency of the entire installation. They are too different, requiring personal qualities that are incompatible in one employee.

Instead of a conclusion, I would like to say that introducing a SIEM makes sense in large companies that have a sufficient budget to maintain a staff of high-level specialists, who are the key element here, as well as funds to purchase expensive software whose payoff will only become noticeable after a few years. The much-advertised correlation of events is also more relevant to large companies with a large number of servers and network equipment. Most small companies can get by with ordinary log management, which can be implemented wonderfully with open-source solutions, with the web UIs of protection tools, and with reports generated by various scanners.

Source: https://habr.com/ru/post/271999/