
Was Gartner right in predicting a change in the approach to information security?



In 2013, Neil MacDonald, an analyst at the reputable consulting company Gartner, predicted that traditional information security strategies would become obsolete by 2020. Did the predictions come true?

I recently stumbled upon an article by Gartner analyst Neil MacDonald titled "Prevention Is Futile in 2020: Protect Information Via Pervasive Monitoring and Collective Intelligence". It was published in 2013 and updated in 2016. In it, the author discusses how the approaches to ensuring enterprise information security would change over the period from 2013 to 2020.
I wondered whether Gartner's predictions had come true.

Here are some points in the article that caught my attention:

1. On their own, enterprises will not be able to protect themselves without the collective exchange of data about threats and attackers.

Indeed, the huge number of new threats and their variants, as well as the huge amount of data that needs to be monitored, correlated and verified, make cloud solutions with collective intelligence built on big data platforms more and more relevant.

2. By 2020, 60% of enterprise information security budgets will be allocated to solutions with rapid attack detection and response technologies.

It is difficult to say whether we will reach the 60% figure, especially given the difficult economic situation. But the trend, in my opinion, is clearly there. For example, at Panda Security we see that in recent years enterprises have increasingly been spending their budgets on precisely such technologies (EDR in particular), because the risk of becoming the victim of an unknown threat or a targeted attack has grown many times over, regardless of the size of the enterprise. And the damage is quite obvious: a breach of the confidentiality and integrity of corporate information.

3. By 2018, 80% of endpoint protection solutions will include user activity monitoring and expert information, as opposed to less than 5% in 2013.

Yes, this trend can indeed be observed: ensuring the security and confidentiality of corporate information requires ever deeper analysis of all ongoing IT processes, supplemented by expert information. Demand creates supply, so more and more solutions on the market offer expert information on detections, as well as in-depth monitoring and analysis of all processes on the network.

4. Part of the answer to the problem of detecting attacks without signature-based mechanisms lies in pervasive monitoring that detects significant deviations from normal behavior and thereby reveals malicious intent. Therefore, assuming that systems will be compromised by advanced targeted threats, information security efforts should be concentrated on detailed, comprehensive and contextual monitoring in order to detect these threats.

It is precisely this deep, continuous and contextual monitoring, which takes into account the causal relationships of every process, that makes it possible to identify malicious behavior more accurately. In addition, such technologies can detect attacks that use no malware at all, i.e. fileless attacks, which have recently become increasingly relevant.

The effectiveness of this approach comes from the fact that, unlike traditional behavioral analyzers, which assess the "normality" of behavior at a single moment in time, contextual monitoring analyzes the entire history of a process (in which the current moment is just one of many sub-events). This makes it possible to assess more correctly how the process appeared on the machine, where it came from, which processes it spawned, what it sent and where, what it accessed, and what its presumed goals are (this is where artificial intelligence comes in).
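
To make the difference from one-off behavioral analysis more concrete, here is a minimal Python sketch of the idea. It is not Panda's or Gartner's actual implementation; the event fields, process names and heuristic are invented purely for illustration. Instead of scoring a single event, the monitor reconstructs the causal chain of a process and evaluates the chain as a whole.

```python
# Minimal sketch of contextual (chain-level) monitoring.
# All fields, paths and rules below are hypothetical and purely illustrative.

from dataclasses import dataclass, field

@dataclass
class ProcessEvent:
    pid: int
    parent_pid: int | None
    image: str                                          # executable path
    actions: list[str] = field(default_factory=list)    # e.g. "net_conn", "file_encrypt"

def causal_chain(events: dict[int, ProcessEvent], pid: int) -> list[ProcessEvent]:
    """Walk parent links to recover the full history that led to this process."""
    chain = []
    while pid in events:
        ev = events[pid]
        chain.append(ev)
        if ev.parent_pid is None:
            break
        pid = ev.parent_pid
    return list(reversed(chain))   # oldest ancestor first

def looks_malicious(chain: list[ProcessEvent]) -> bool:
    """Toy heuristic: an office document spawning a script host that then
    touches the network or encrypts files is suspicious as a whole, even if
    no single step would trigger a signature."""
    images = [e.image.lower() for e in chain]
    actions = {a for e in chain for a in e.actions}
    office_origin = any("winword" in i or "excel" in i for i in images)
    script_host = any("powershell" in i or "wscript" in i for i in images)
    return office_origin and script_host and bool({"net_conn", "file_encrypt"} & actions)

# Example: WINWORD.EXE -> powershell.exe that contacts the network and encrypts files
events = {
    100: ProcessEvent(100, None, "C:\\Program Files\\Microsoft Office\\WINWORD.EXE"),
    200: ProcessEvent(200, 100, "C:\\Windows\\System32\\powershell.exe",
                      ["net_conn", "file_encrypt"]),
}
print(looks_malicious(causal_chain(events, 200)))   # True
```

The point of evaluating the chain rather than the step is that none of the individual actions (opening a document, starting PowerShell, making a network connection) would necessarily be blocked on its own, but taken together the sequence is clearly abnormal.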



5. Detailed monitoring of all processes running on the user's system, as well as their interaction with content, executable files and corporate systems, will give enterprises full visibility into everything that is happening, something like a DVR. Therefore, in the event of an incident, this data can be checked to understand which users were targeted, which systems may have been compromised, and which information may have been damaged.

This is also clearly noticeable. Obviously, no administrator is going to sit and continuously analyze the huge array of data coming in from all machines across thousands of dimensions; that is unrealistic. Modern systems do allow certain triggers to be configured so that the administrator is promptly notified of any deviations, but in general it all works (or should work) automatically.

This information becomes truly useful when suspicions arise about certain processes, files or employee behavior (a trigger fires!), or when an incident has actually occurred. In that case, online access to this data, with the ability to analyze it quickly and in depth, makes it possible to trace the entire life cycle of the detection and the dynamics of the attack or suspicious behavior pattern: what, where, when, from where and how. As a result, the source of the attack can be localized, victims and perpetrators identified, the extent of the damage assessed and, most importantly, based on the analysis of this data, the identified weak points can be eliminated and a more effective model of behavior in such emergencies worked out.
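
Schematically, this "DVR plus triggers" idea could be sketched as follows. This is a toy Python illustration with invented field names and rules, not the interface of any real EDR console: every action is recorded, a trigger promptly flags deviations, and after an incident the recording is queried to reconstruct the timeline of what happened, where and when.

```python
# Toy sketch of the "DVR plus triggers" idea; all field names and rules are hypothetical.

from datetime import datetime

# Recorded telemetry: one row per observed action (illustrative fields only).
telemetry = [
    {"ts": datetime(2017, 5, 12, 9, 1), "host": "PC-07", "user": "ivanov",
     "process": "invoice.doc.exe", "action": "net_conn", "target": "203.0.113.5"},
    {"ts": datetime(2017, 5, 12, 9, 3), "host": "PC-07", "user": "ivanov",
     "process": "invoice.doc.exe", "action": "file_encrypt", "target": "D:\\docs"},
]

def trigger(event: dict) -> bool:
    """Toy trigger: notify on any file-encryption activity by a non-whitelisted process."""
    return event["action"] == "file_encrypt" and not event["process"].startswith("backup")

def incident_timeline(indicator: str) -> list[dict]:
    """Forensic query: everything the suspicious process did, in chronological order."""
    return sorted((e for e in telemetry if e["process"] == indicator),
                  key=lambda e: e["ts"])

# Prompt notification of deviations...
for e in telemetry:
    if trigger(e):
        print("ALERT:", e["host"], e["user"], e["action"], e["target"])

# ...and post-incident reconstruction of the timeline.
for e in incident_timeline("invoice.doc.exe"):
    print(e["ts"], e["host"], e["action"], "->", e["target"])
```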

In addition, the author of the article says that in the era of BYOD and cloud services, enterprise IT departments are losing control over user devices, which limits their ability to exercise total control. Therefore, according to Neil MacDonald, there will be a shift toward protecting the information itself rather than the systems.

As one of his recommendations, the Gartner analyst proposes introducing monitoring systems for corporate endpoint devices. In his opinion, such systems should ideally be an integral part of endpoint protection platforms (EPP), so that there is no need to purchase an additional third-party solution.

Perhaps in Russia all of the above trends are still somewhat weaker than in Europe or the United States, which can be explained by a whole set of standard objective reasons, although, in my opinion, the dynamics here are about the same.

Honestly, this article also seemed interesting to me because the approach we implemented in the Panda Adaptive Defense 360 solution turned out to be in line with the author's forecasts. I remember how back in 2013 we were just testing the prototype of the Adaptive Defense solution family, running on a completely new security model. Now, a few years later, I see that Gartner's predictions are coming true, and the path we chose, I hope, has turned out to be the right one.

P.S. In conclusion, I could not resist inviting you to watch a video review in which an attempt is made to infect a computer with a set of various fresh ransomware variants (WannaCry 2.0, Cerber, Spora, Razy, Goldeneye), as well as other malware. On this computer, Adaptive Defense 360 is installed with the antivirus and firewall disabled (only the enhanced protection module against unknown threats is activated), and the Windows firewall and Windows Defender are also disabled:

Source: https://habr.com/ru/post/329978/

