
SOC for beginners. 3 myths about automation and artificial intelligence in the Security Operations Center

Recently (and the thematic SOC Forum was no exception), we hear more and more often that people are secondary in SOC processes and that technology can replace most of them: “the death of Tier 1 analysts,” “artificial intelligence beats the smartest pianist / philologist / PhD in natural sciences,” “self-learning rules,” and so on. Our experience suggests that SkyNet is still far from seizing power, and that the role of people should not be underestimated even in basic SOC processes.

Today I would like to figure out where the truth ends and the frantic marketing begins, and along the way debunk a few myths about the capabilities of artificial intelligence. If you are interested and willing to discuss, welcome under the cut.



“Hackers won't attack the simple way!”


Myth 1: “Analyzing basic incidents is last century; you have to control extremely complex attacks. Only Kill Chain, only hardcore!”

There is an opinion that one of the easiest ways to reduce the load on a SOC and “focus” its efforts is to stop handling basic, or “atomic,” security incidents and instead try to control only complex chains of a developing attack or, as it is now very fashionable to say, the Kill Chain. “Nobody is interested in a single virus infection or a single attempt to exploit a vulnerability on a perimeter server; there are hundreds of such incidents, and there is no need to deal with them,” we occasionally hear from the community. And here, it would seem, is the profit: fewer incidents, less pressure on SOC specialists, you can hire fewer people and minimize routine operations.
However, for some reason, a few facts get forgotten:

1. To date, any method of detecting a Kill Chain is far from perfect, and an attacker can bypass it.

At each stage of the attack, the attacker has dozens or hundreds of options for developing it further and gaining a foothold in the infrastructure: scanning the network for a new entry point, escalating privileges on the host, obtaining passwords programmatically or by analyzing configuration files, using specialized malware, including malware that hides its behavior from security tools and logging.

The attacker's position is always preferable: he can plan the attack in advance and calculate his steps. The defender's position is obviously weaker. Under such circumstances, ignoring the one mistake of the attacker that allowed us to see an incident means underestimating the adversary and giving him the chance to develop his attack and win.

2. Any unblocked and uninvestigated attack harbors a threat.

If there is an infected computer on our network, can we consider the network safe? Moreover, what guarantees do we have that we will see the next step of the attacker who captured the machine, that the malware's functionality will not let it modify or alter the host's logging settings, or that the anti-virus software will not be removed and replaced with a dummy that communicates with the command-and-control server?

Similarly, if we see a signature fire for an exploitation attempt against a server that is vulnerable to it, then without analyzing the host's internal logs and searching for traces of the attacker's further actions, we can never be sure that our perimeter has not already been breached.
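As a minimal illustration of this point, here is a sketch of the kind of follow-up check a playbook might run after a perimeter exploit-attempt alert. The field names, event types and the 30-minute window are assumptions made up for the example, not a description of any real SIEM's data model.

```python
from datetime import timedelta

# Hypothetical post-exploitation indicators we would look for in host logs.
SUSPICIOUS_PROCESSES = {"cmd.exe", "powershell.exe", "wscript.exe", "certutil.exe"}


def exploit_attempt_needs_escalation(ids_alert, host_events, window_minutes=30):
    """Return True if host-side activity after a perimeter exploit attempt
    suggests the attack may have succeeded, so the incident must be
    investigated manually rather than auto-closed as "blocked".

    ids_alert   -- dict with 'dst_ip' and 'timestamp' (datetime) of the IDS alert
    host_events -- iterable of dicts with 'host_ip', 'timestamp', 'event_type'
                   and 'process_name', collected from the target server's logs
    """
    start = ids_alert["timestamp"]
    end = start + timedelta(minutes=window_minutes)

    for event in host_events:
        if event["host_ip"] != ids_alert["dst_ip"]:
            continue
        if not (start <= event["timestamp"] <= end):
            continue
        # A new service, scheduled task or suspicious child process right
        # after the exploit attempt is a reason to escalate, not to close.
        if event["event_type"] in {"service_installed", "scheduled_task_created"}:
            return True
        if (event["event_type"] == "process_start"
                and event.get("process_name", "").lower() in SUSPICIOUS_PROCESSES):
            return True
    return False
```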

3. Only in the world of the “spherical cow in a vacuum” can a SOC control every activity and event in the network.

Probably every security specialist has at least once in his life wanted to buy a SIEM / sandbox / EDR for the entire company in order to see and analyze every single event, feeling like a mountain eagle or the mighty Sauron. But such desires run into very mundane earthly matters: there is not enough budget to connect the entire infrastructure to monitoring, IT services resist applying security policies in production segments, the platform cannot “digest” that volume of events, and so on.

As a result, the work of a SOC resembles not a single web covering every network segment in full, but a system of tripwires placed in key locations, while the SOC keeps watching for “anomalies” and oddities in all the other segments.
With such restrictions, it is worth taking a careful look at the actual coverage area of the magical Kill Chain: it may turn out that even if the attacker successfully executes his “killer chain,” the information or system he is after is simply not in the controlled zone, and the “street magic” will not work for entirely objective reasons.

As a result, for all the benefits of identifying attack chains and handling the related incidents, the key security task remains ensuring targeted hygiene of the environment and the infrastructure.

We believe it is necessary to learn to work with each alert separately. Only after an analyst understands how to manually dissect each specific activity does it make sense to try to overlay a kill-chain scheme on the alerts and assemble the atomic incidents into a whole. In that case they can be useful as additional triggers or filters.

Even though this approach significantly increases the load on SOC operators and engineers, without it, it is hard to count on effectively resisting anyone, including complex attacks.
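To make the idea of “atomic incidents as additional triggers” more concrete, here is a minimal sketch. The stage names, alert fields and the threshold of three stages are invented for the example; the point is only that each alert is handled on its own, and a separate pass checks whether alerts on one host line up into several kill-chain stages and raises the priority of the whole group.

```python
from collections import defaultdict

# A simplified set of kill-chain stages; the names are illustrative only.
KILL_CHAIN_STAGES = {"delivery", "exploitation", "installation", "c2", "actions"}


def group_alerts_into_chains(alerts, min_stages=3):
    """Group already-triaged atomic alerts by host and flag hosts where the
    alerts cover several different kill-chain stages.

    alerts -- iterable of dicts with 'host', 'stage' and 'id'; every alert
              is assumed to have been analyzed individually first.
    Returns a dict host -> list of alert ids worth escalating as one case.
    """
    by_host = defaultdict(list)
    for alert in alerts:
        by_host[alert["host"]].append(alert)

    escalations = {}
    for host, host_alerts in by_host.items():
        stages = {a["stage"] for a in host_alerts if a["stage"] in KILL_CHAIN_STAGES}
        # The chain view is only an extra trigger: it never replaces the
        # per-alert analysis, it just bumps the priority of the whole set.
        if len(stages) >= min_stages:
            escalations[host] = [a["id"] for a in host_alerts]
    return escalations
```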

Tell me what the host is, and I will tell you what to do


Myth 2: “The expert system automatically enriches itself with information about the host, the environment and all the other data required to make a decision and filter out false positives.”

A huge number of vendors, talking about the magic of automation achieved with their products and about technologies for automatically collecting host information, now use the terms Asset Management, Inventory and so on. Let's look at what information they actually collect and consider together whether it is enough for correct incident analysis.

  1. Inventory and vulnerability scanning systems collect an enormous amount of technical information about a host:
    • Hardware characteristics (processors, memory, disks, etc.).
    • Information about the OS and all installed patches (and, if you are lucky, even the vulnerabilities relevant to them).
    • A list of installed software with versions.
    • A list of processes running at the time of the scan.
    • A lot of additional information (MD5 hashes of libraries, etc.).
  2. If this picture is overlaid with systems that analyze the configuration and/or monitor the network equipment, the result seems to be a complete and universal picture:
    • We understand from which segments the host is reachable.
    • We understand the extent of its vulnerabilities and the attackers' options for compromising it.

Thus we can see from where an attack on a particular machine can and cannot be carried out (schematically, something like the sketch below). Is this helpful? Of course.
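Purely for illustration (the field names and the simplified ACL format are assumptions, not any vendor's data model), the “technical” picture described above can be thought of as something like this:

```python
from dataclasses import dataclass, field


@dataclass
class HostInventory:
    """Technical facts a scanner or inventory system can gather on its own."""
    ip: str
    os: str
    installed_patches: list = field(default_factory=list)
    software: dict = field(default_factory=dict)        # name -> version
    running_processes: list = field(default_factory=list)
    known_vulns: list = field(default_factory=list)     # CVE identifiers


def reachable_segments(host_segment, acl_rules):
    """Return the set of network segments that can reach the host's segment,
    based on a simplified rule list: each rule is a dict with 'src_segment',
    'dst_segment' and 'action' ('permit' / 'deny')."""
    return {
        rule["src_segment"]
        for rule in acl_rules
        if rule["dst_segment"] == host_segment and rule["action"] == "permit"
    }
```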

But what we still clearly lack is information about what this host means to our company from an information security point of view: whether we should worry about a brute-force attempt against this machine, whether a Remote Admin Tool may legitimately run on it, and how fast we need to “run” if a problem occurs on this host.

Some of this data can be taken from the CMDB (if one exists) or from an export of the virtualization platform (usually the most important things are in the comments), but the key information, namely what kind of machine this is and what it means for information security, cannot be found anywhere until we study the asset ourselves, talking to its owner and to the company's security officer.

Compare two events:

"The RAT utility AmmyAdmin was launched on host 172.16.13.2 located in the Moscow central office, finance department"
and
“The RAT utility AmmyAdmin was launched on host 172.16.13.2, the machine of Pyotr Mikhailovich Ivanov, deputy head of the communications center; his function is processing payment runs of the Interregional Center of Informatization and working with the AWS KBR; remote administration tools are prohibited”

In our practice, we use several metrics to assess the “security significance” of a host: whether it can be used to monetize the attack (bitcoin mining does not count, we are talking about compromising payments :)), how critical its information is to modification or disclosure, and several others. In the end, this gives us an integral weighted assessment of the host's criticality depending on the type of attack, and that same assessment determines the “pace of the race” in each specific situation and for each specific incident. But the path to this knowledge lies through a detailed audit of both the infrastructure and the organizational structure.
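As a rough sketch of such an integral weighted assessment (the factors, weights and numbers below are invented for illustration and are not our production model):

```python
# Illustrative "security significance" factors for one host; in reality these
# values come from an audit of the infrastructure and the org structure.
ASSET_PROFILE = {
    "monetization_potential": 0.9,   # e.g. the host touches payment processing
    "data_criticality": 0.7,         # impact of data modification / disclosure
    "business_role_weight": 0.8,     # importance of the owner's function
}

# How strongly each factor matters for a given attack type (illustrative).
ATTACK_WEIGHTS = {
    "rat_launch":  {"monetization_potential": 0.6, "data_criticality": 0.2, "business_role_weight": 0.2},
    "brute_force": {"monetization_potential": 0.3, "data_criticality": 0.4, "business_role_weight": 0.3},
}


def host_criticality(profile, attack_type):
    """Weighted criticality of a host for a specific attack type.
    The result drives the "pace of the race": SLA, escalation path, etc."""
    weights = ATTACK_WEIGHTS[attack_type]
    return sum(profile[factor] * weight for factor, weight in weights.items())


# The same host scores differently depending on the attack type.
print(host_criticality(ASSET_PROFILE, "rat_launch"))   # ~0.84
print(host_criticality(ASSET_PROFILE, "brute_force"))  # ~0.79
```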

“Automating the disruption of business processes”


Myth 3: “Automated Incident Response not only detects an anomaly or incident, but also automatically tells the security tools to block it, so no round-the-clock monitoring of the system is needed.”

One more piece of “street magic” often encountered in the community is the ability to automatically apply blocking policies on security tools. The approach itself is not new, and security specialists of the old school may remember the first steps in this direction: surely many tried to automatically enable newly released IPS signatures, and immediately in blocking mode.

Even if we leave aside the difficulty of making a final decision without context that is inaccessible to an automated system, there remains a very high risk of an automation or vendor error, which can have a very negative effect not only on a specific system but on all the security tools, or even on business processes.

Here are some examples from practice:


I think a dozen or two similar stories could be collected from every security specialist.

From this follows a principle we consider very important: information security should not affect business processes or hinder business development, and it certainly should not do so spontaneously and unpredictably for others, and especially for itself. Therefore, automatic response is applicable only to a very narrow class of cases and tasks, and it cannot replace your own center of competence or remove the need for quick manual incident response by the information security team (we will still have to sleep a little less every year).
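As a sketch of what “a very narrow class of cases” might look like in practice (the case list, thresholds and function names are hypothetical, not a recommendation): everything outside a short, pre-approved whitelist goes to a human.

```python
# A narrow whitelist of scenarios where automatic blocking is considered safe;
# everything else is routed to an analyst for a manual decision.
AUTO_BLOCK_CASES = {
    ("malware_c2_callback", "workstation"),   # known C2 traffic from an ordinary workstation
    ("phishing_url_click", "workstation"),
}


def decide_response(incident):
    """Return 'auto_block' only for pre-approved low-risk combinations of
    incident type and asset class; otherwise require a manual decision.

    incident -- dict with 'type', 'asset_class' and 'asset_criticality' (0..1)
    """
    key = (incident["type"], incident["asset_class"])
    # Never auto-block on highly critical assets, even for whitelisted cases:
    # the cost of a false positive there is a broken business process.
    if key in AUTO_BLOCK_CASES and incident["asset_criticality"] < 0.5:
        return "auto_block"
    return "manual_review"
```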

And with that, the authors of the “SOC for…” series of articles wish you a successful Year of the Yellow Earth Dog and calm in cyberspace.



Tell us whether you believe these theses, and if you want to discuss the topic or argue with our points, write in the comments.

Source: https://habr.com/ru/post/345648/

