In 2016 I worked on a large number of information security incident response tasks, spending roughly 300 hours in total, either performing the necessary actions myself or advising the affected party's specialists. This article is based on the notes I took during that work.
The subtitle of this section could be the question: "How does a standard incident differ from a catastrophe?"
The quality of logging determines how all further work on an incident goes. Practice keeps convincing me that audit logs are the foundation of a sound security policy and an effective response to information security breaches.
When I start on an incident, the first thing I try to establish is whether the affected party's logs can be relied on; the success of the whole effort depends on the answer. Fortunately, in CI/CD and DevOps culture, centralized logging and alerting systems are becoming the standard. I have come to expect that a company founded within the last three years will be able to provide a large amount of data suitable for analysis.
If for one reason or another you are postponing the modernization of your logging system, I strongly recommend sacrificing whatever else you must in order to invest in high-quality logging. Aim to collect as many logs as possible and to store them in as few places as possible.
It is very important to collect logs from hosts, applications, authentication events, and cloud infrastructure actions. This information will help in incident investigations and preventive work, and will be useful to other teams, for example, for assessing service availability.
When setting up logging, do not forget about user privacy, and keep log retention periods necessary and sufficient. It is common practice to keep these periods short (though much depends on the specifics of the product being built).
Conclusion: well-designed, accessible, centralized logs that are useful for generating alerts deserve priority over other information security projects. Done right, a new type of alert should go into production within ten minutes.
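To make the idea concrete, here is a minimal sketch of an alert rule over centralized structured logs. The event schema (`type`, `result`, `user` fields) and the threshold are my own illustrative assumptions, not a real product's format; production systems would run rules like this in a SIEM or log pipeline.

```python
import json
from collections import Counter

# Hypothetical threshold -- tune to your own baseline.
FAILED_LOGIN_THRESHOLD = 5

def alert_on_failed_logins(log_lines, threshold=FAILED_LOGIN_THRESHOLD):
    """Return the set of users whose failed-login count reaches the threshold."""
    failures = Counter()
    for line in log_lines:
        event = json.loads(line)
        # Count only failed authentication events.
        if event.get("type") == "auth" and event.get("result") == "failure":
            failures[event.get("user", "unknown")] += 1
    return {user for user, count in failures.items() if count >= threshold}

# Example: six failed attempts for "alice", one for "bob".
logs = [json.dumps({"type": "auth", "result": "failure", "user": "alice"})] * 6
logs += [json.dumps({"type": "auth", "result": "failure", "user": "bob"})]
print(alert_on_failed_logins(logs))  # -> {'alice'}
```

The point of the ten-minute rule above is that adding one more rule of this shape to an existing pipeline is a small, fast change; it is building the pipeline itself that takes the investment.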
Several incidents I worked on last year ended with the original penetration method never being found.
This is a real nightmare for the affected party's specialists, who must report to management on the steps taken without, in fact, having reliable information to base them on. Containment ends up incomplete and is carried out on the principle of "we did everything we could."
If the root cause is known, the remediation plan looks like this: "We wipe this laptop, replace this server, block this one account."

Otherwise: "We wipe ALL laptops, replace ALL servers, block ALL accounts."
Finding the root cause of an incident is an important milestone that sets the emotional atmosphere for all further work, and it can significantly affect the final outcome.
As long as the root cause remains unknown, tension in the team builds, which can lead to internal conflicts. I try not to let such situations develop. I remember cases where literally one heated conversation was all that separated the team from general panic, mutual accusations, and mass resignations.
Regardless of how large your company is, it is very important to assign roles to team members and to periodically rehearse how they will interact during a crisis.
Conclusion: conduct regular staff exercises and penetration tests. Treat every vulnerability found as a full-fledged incident. Practice scenarios in which you do not control the situation, are not omniscient, lack the necessary logs, and do not understand what is happening. Do not miss the chance to train your ground game from the "pinned on your back" position, because in reality the defending side often begins its most important fights exactly there.
The phrase "Bring Your Own Device" is often used as a blanket description of the risks that employees literally "bring" to work. However, personally targeted attacks very often do not fall under this definition.
Many people remember the recent story of an APT group whose hackers attacked employees' personal mail and devices. If keys and passwords are stored there, or corporate systems are accessed from those devices, it no longer matters whether the devices are ever brought into the office.
It is incredibly difficult to assess the scale of the threat that employees' home devices pose to a company's information security, because even after an incident people are reluctant to share personal information, which complicates the investigation. The general trend is the use of identical passwords for home and corporate accounts, as well as the storage of work credentials on personal devices, including devices that are never used to access corporate resources.
It is also worth mentioning one incident during whose investigation there were serious reasons to believe that an engineer, for remote debugging purposes, had stored important credentials in his personal cloud infrastructure.
It must also be taken into account that examining users' personal devices may be too intrusive and expensive, which again underlines the importance of collecting high-quality logs.
Conclusion: find ways to improve employees' information security at home. Pay for password managers, multifactor authentication hardware, and so on. Convince employees to involve the information security department whenever they have trouble securing their personal accounts or notice suspicious activity, even if it is not work related. Teach employees to look after the information security of their family members and, if necessary, help them with it.
Integrating your platform with Bitcoin companies puts your information security at risk. For a more detailed look at the issue, see the Blockchain Graveyard and SendGrid's blog.
Conclusion: if you depend heavily on technology partners, find ways to protect yourself. If you are a Bitcoin company, turn your paranoia up to the maximum and take extraordinary measures to restrict access to partner systems.
Last year, many breach reports mentioned "highly skilled hackers" and were then criticized because the actual penetration turned out to be trivial.
Many attacks begin with phishing, purchased exploits, leaked keys, and other obvious and relatively easily preventable penetration methods.
In reality, the "sophisticated" part is usually not discussed in detail. It is too easy to point at the "amateurish" attack vector and ignore everything else.
But do not judge an opponent by his first step. He can show what "sophistication" means when expanding the attack from a beachhead won by simple methods.
For example, the initial attack vector may be of no particular interest, while the way the attacker obtained access or credentials, say by compromising a third-party platform, can say a lot about your opponent's motivation and skill.
For example, would Lockheed call the methods used to hack them in 2011 sophisticated, even considering that the attackers were well prepared, having stolen RSA SecurID data? If an attack began with ordinary phishing, does that mean the adversary is weak?
Conclusion: highly skilled hackers do not flex their muscles at the start of an intrusion. Make no mistake: do not underestimate clumsy initial hacking attempts. Perhaps the attacker simply wants results with minimal effort. The next ordinary phishing attempt may be followed by a fresh 0-day.
Managing sensitive data is an important factor in information security.
Last year I did not once handle an incident at a company that uses role-based access models and manages confidential data through a dedicated secret store.
This may mean one of the following: such companies do not exist, there are few of them, or they do not face incidents serious enough to involve an incident response specialist.
In the course of my work I constantly saw cases where keys were written into source code, leaked to cloud logging platforms, stored unprotected on employees' personal devices, or copied to gists and pastebins. All of these mistakes served both as root causes of breaches and as aggravating factors (when, of course, the attacker managed to get hold of the data).
Conclusion: look into AWS IAM roles, do not hardcode keys and passwords in source code, do not give developers credentials to production systems, and be ready to roll out new credentials quickly and often.
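As a minimal illustration of the "do not hardcode keys" point, here is a sketch of loading a secret from the environment and failing fast when it is missing. The variable name `DEMO_API_KEY` is purely illustrative; in production you would typically fetch the value from a dedicated secret store (Vault, AWS Secrets Manager, etc.) via an IAM role rather than from a plain environment variable.

```python
import os

def get_secret(name):
    """Fetch a secret from the environment; fail fast if it is missing."""
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"secret {name!r} is not configured")
    return value

# BAD: a key committed to source control, visible to anyone with repo access.
# API_KEY = "sk_live_..."

# GOOD: the key lives outside the repository and can be rotated freely.
os.environ["DEMO_API_KEY"] = "test-value"  # set here for demonstration only
api_key = get_secret("DEMO_API_KEY")
```

The practical benefit is exactly the one the conclusion names: a secret that lives outside the code can be rotated quickly and often without a deploy of the application itself.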
However, several incidents occurred even in organizations that do a good job of warning users (especially management) about the danger of password reuse. Unless those warnings were addressed personally to specific employees, they lost their force as soon as personal accounts were involved.
Raising awareness may delay the inevitable, but it is far more effective to introduce identity management based on an identity provider and to integrate Single Sign-On into cloud services. I have never had to deal with an incident in which MFA, used as part of a corporate identity solution, was defeated.
If Single Sign-On is not an option, using MFA wherever possible helps minimize risk. I mention GitHub separately, because developers often store sensitive data in source code. There, as a temporary measure, you can enforce multifactor authentication until a suitable store is found for keys, passwords, and other secrets.
Last year I came across quite a few insider-related cases. In all of them the motives were well known; I have encountered the same ones regularly for several years now, and 2016 was no exception.
One group of incidents involves people who consider themselves part of the Silicon Valley startup culture. They communicate very actively with the press, trying to draw attention to their current or future company.
In particular, an insider might reason like this: "If I leak something interesting to journalists now, maybe they will later write about my startup idea."
Although this is a rather specific scenario, employees of high-tech companies do love to hand the press intellectual property and product information in exchange for various favors.
This problem has become widespread enough to be called a trend. Effective protection is hard to find, since such leaks come from employees who do not have a high level of access. It is difficult to devise a general solution that does not at the same time turn the organization into a completely closed, Apple-like company. Most chief executives prefer to remain open with their employees and are willing to accept this risk.
The next group includes cases involving internal user support tools. As soon as you have a certain number of employees with access to administrative tools, sooner or later one of them is bound to do damage (alone or in collusion with someone else).
Almost all the organizations I helped last year had some lagging area with enormous technical debt.
This leads me to believe that companies that treat such debt as part of the engineering process are usually well organized, and their risks are noticeably lower.
A startup can develop very quickly, cut corners, compete aggressively, and be prepared to take risks.
During development and launch, startups differ greatly in how they document the cases where they had to compromise, and in how they retrospectively analyze the debt they have accumulated.

Then they pay those debts back.
I have very rarely seen a company pay off absolutely all of its technical debt. But organizations that at least know where they owe what never fall far enough behind to pass the point of no return and end up in a position that is almost impossible to defend.
Debt accumulates from different directions: scale, development speed, site reliability, user experience nuances, manual work that precedes automation, and, finally, security.
The problem with information security debt is that it does not announce itself loudly. Other forms of debt lead to errors, costs, user complaints, and the rage of engineers. Information security debt leads only to the creation of vulnerabilities, and its size is very difficult to measure: doing so requires painstaking manual work or advanced technology.

An engineering company that skillfully manages its technical debt usually has fewer security problems.
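One crude but concrete way to start measuring security debt is to count likely-hardcoded secrets in a codebase. The patterns below are a small illustrative subset of my own choosing; real scanners (truffleHog, gitleaks) use far richer rule sets plus entropy checks.

```python
import re

# Illustrative patterns only -- not an exhaustive rule set.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                      # AWS access key id
    re.compile(r"(?i)password\s*=\s*['\"][^'\"]+['\"]"),  # inline password literal
]

def count_secret_findings(source_text):
    """Rough metric: how many likely-hardcoded secrets appear in the text."""
    return sum(len(p.findall(source_text)) for p in SECRET_PATTERNS)

sample = 'password = "hunter2"\nkey_id = "AKIAABCDEFGHIJKLMNOP"\n'
print(count_secret_findings(sample))  # -> 2
```

Run over a whole repository and tracked over time, even a metric this rough turns an invisible debt into a number that can go down.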
I have rarely seen this in practice, but even simply understanding the problem and wanting to move toward a solution is an excellent sign. Google is one of the few companies that has structured its release-related "error budget" and written corresponding policies, making the "debt" problem measurable and solvable. Ollie Whitehouse of NCC Group has also spoken about this.
Many organizations do not even suspect that some of their standard processes (retrospectives, post-incident analysis) help them avoid accumulating excessive technical debt.
Conclusion: before you take the next big step forward, make sure that your biggest debts are paid off.
Information security incidents can teach us a lot. It is very important to find ways to talk about them and to improve our skills by working through them.
About me: I work in information security; previously at Facebook and Coinbase. Currently I advise and consult for several startups. I specialize in incident response and building information security teams, but I take on many other IT tasks as well.
Thanks to Collin Greene and Rob Witoff.
Source: https://habr.com/ru/post/319988/