
Problems of the FSTEC's current methodology for determining actual threats



Good day, Habr! Today we would like to criticize the document "Methodology for determining actual threats to the security of personal data during their processing in personal data information systems", approved by the FSTEC of Russia on February 14, 2008 (hereinafter, the Methodology).

This Methodology is the only approved document for determining actual security threats, and under current legislation, "methodological documents developed and approved by the FSTEC of Russia ..." are used "to identify information security threats and develop an information security threat model".
As the approval date shows, the Methodology is more than 10 years old, and it has accumulated quite a few problems. Let's look at which ones.



Problem number 1. Binding to personal data



This problem emerges already in the title of the document: “Methodology for determining actual threats to the security of personal data when they are processed in personal data information systems”.

Further in the text of the Methodology we see the following:

The Methodology is intended for use when carrying out work to ensure the security of personal data during their processing in the following automated personal data information systems:

  • state or municipal ISPDn;
  • ISPDn created and (or) operated by enterprises, organizations and institutions (hereinafter, organizations) regardless of the form of ownership, necessary to perform the functions of these organizations in accordance with their purpose;
  • ISPDn created and used by individuals, except when the latter use these systems exclusively for personal and family needs.



OK, so it is bound to personal data and ISPDn, but what's the problem? The problem arises when we need to write a threat model for, say, a state information system (GIS) in which no personal data are processed.

Everything would be fine if every GIS operator kept to itself in terms of information security: developed a threat model for itself using a method of its own invention, used it only itself and showed it to no one. But Resolution of the Government of the Russian Federation No. 555 of May 11, 2017 obliged the operators of all newly created GIS to coordinate threat models and technical specifications for creating an information protection system with the FSTEC of Russia and the FSB of Russia.

And, of course, in the case of an overly "creative" approach to developing a threat model, the GIS operator will receive an answer along the lines of: "The threat model has been developed without taking into account the regulatory documents approved by the FSTEC of Russia; redo it."

And we simply do not have another approved methodology.

Problem number 2. Controversial legitimacy



The first paragraph of the Methodology states:

The methodology for determining actual threats to the security of personal data (PDn) during their processing in personal data information systems (ISPDn) was developed by the FSTEC of Russia on the basis of Federal Law No. 152-FZ of July 27, 2006 "On Personal Data" and the "Regulation on ensuring the security of personal data during their processing in personal data information systems", approved by Resolution of the Government of the Russian Federation No. 781 of November 17, 2007, taking into account the current regulatory documents of the FSTEC of Russia on information protection.


The problem here is that this government resolution was canceled in 2012. If only that resolution appeared in this paragraph, the Methodology could be considered completely illegitimate. But there is also 152-FZ, which is alive and well. Legal opinions on the legitimacy of the Methodology differ.

In any case, as already mentioned, this is the only at least somehow approved document, so we suffer and use it. Why do we "suffer"? Read on.

(Semi) Problem number 3. No link to the BDU of the FSTEC of Russia



While all current regulatory documents of the FSTEC require using the Threat Data Bank (BDU) as source data for threat models, the Methodology refers to the document "Basic model of threats to personal data security during their processing in personal data information systems", which was also approved in 2008 and which, in practice, cannot be used.

In general, this is not really a problem: we simply use the BDU and that's it. But the situation vividly illustrates the inconsistency of the regulatory documents. While FSTEC orders 17, 21 and 239 refer to the at least somewhat updated BDU, the Methodology is stuck in 2008.

Problem number 4. Initial Security Index



So we have reached the actual method of determining actual threats. Its essence is as follows: we have a list of threats; for each threat, to determine whether it is relevant or not, we need to define a number of parameters and then, through the calculations described in the Methodology, arrive at the desired goal: a list of actual threats.

The first of these parameters is the "level of initial security", also known as the "degree of initial security", also known as the "coefficient of initial security", also known as Y1 (this, by the way, is another minor problem of the Methodology: too many names for one and the same entity).

The degree of initial security is determined as follows. There are 7 indicators (technical and operational characteristics of the system); each indicator has several possible values, and for each indicator we must choose exactly one value, the one that best fits our information system. Each selected value maps to a level of security (high, medium or low).

Next, we count how many selected values have the level "high", "medium" and "low". If 70% or more of the 7 indicators received a "high" level of security, the degree of initial security of the entire system is high (Y1 = 0). If 70% or more received a high or medium level of security, the degree of initial security is medium (Y1 = 5). If neither condition is met, the degree of initial security is low (Y1 = 10).
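The counting rules above can be sketched in a few lines of code. This is an illustrative sketch, not part of the Methodology itself; the function name and the rating strings are our own.

```python
def initial_security(levels):
    """Degree of initial security and Y1 from the 7 indicator ratings.

    levels: a list of 7 strings, each "high", "medium" or "low",
    one per technical/operational characteristic of the system.
    """
    n = len(levels)
    high = sum(1 for lv in levels if lv == "high")
    high_or_medium = sum(1 for lv in levels if lv in ("high", "medium"))
    if high / n >= 0.7:           # 70%+ rated "high"
        return "high", 0          # Y1 = 0
    if high_or_medium / n >= 0.7: # 70%+ rated "high" or "medium"
        return "medium", 5        # Y1 = 5
    return "low", 10              # Y1 = 10

# A fairly typical real-world system: only a couple of "high" ratings.
print(initial_security(["high", "medium", "medium", "medium",
                        "high", "medium", "low"]))  # ('medium', 5)
```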

List of characteristics and their values (security level in parentheses):

1. By territorial location:
  • distributed ISPDn covering several regions, territories, districts or the state as a whole (low);
  • urban ISPDn covering no more than one locality (city, town) (low);
  • corporate distributed ISPDn covering many divisions of one organization (medium);
  • local (campus) ISPDn deployed within several closely located buildings (medium);
  • local ISPDn deployed within one building (high).

2. By presence of a connection to public communication networks:
  • ISPDn with multipoint access to a public communication network (low);
  • ISPDn with single-point access to a public communication network (medium);
  • ISPDn physically separated from public networks (high).

3. By built-in (legal) operations on records of personal data bases:
  • reading, searching (high);
  • recording, deletion, sorting (medium);
  • modification, transfer (low).

4. By delimitation of access to personal data:
  • ISPDn to which a defined list of employees of the owner organization, or the PDn subject, has access (medium);
  • ISPDn to which all employees of the owner organization have access (low);
  • ISPDn with open access (low).

5. By presence of connections with other PDn databases of other ISPDn:
  • integrated ISPDn (the organization uses several PDn databases of ISPDn while not owning all of the databases used) (low);
  • ISPDn using one PDn database owned by the organization that owns this ISPDn (high).

6. By level of generalization (depersonalization) of PDn:
  • ISPDn in which the data provided to the user are anonymized (at the level of an organization, industry, region, etc.) (high);
  • ISPDn in which data are anonymized only when transferred to other organizations and are not anonymized when provided to a user within the organization (medium);
  • ISPDn in which the data provided to the user are not anonymized (i.e. contain information allowing identification of the PDn subject) (low).

7. By volume of PDn provided to third-party users without preprocessing:
  • ISPDn providing the entire PDn database (low);
  • ISPDn providing part of the PDn (medium);
  • ISPDn not providing any information (high).



Seems reasonable, but...

First, the indicators, their values and security levels are distributed in such a way that for a real information system (anything other than a standalone computer disconnected from all networks, both communication and electrical) you will never get a high degree of initial security.

Secondly, the indicators themselves and their values are very strange. It often happens that either two values fit one indicator at once, or none do.

Example 1:



Take the indicator "By territorial location"; its possible values are listed in the table above. Situations where two values fit the information system at once are not rare: "distributed" and "corporate", or "urban" and "corporate".

Example 2:



Take the indicator "By delimitation of access to personal data"; its possible values are listed in the table above. There are two extremes: either the system is open, or only employees of the organization that owns the information system have access to it.

In the modern world, there are often situations where third-party users (who are not employees of the information system owner) are given access to protected information while the system is not publicly accessible. These cases are perfectly reflected in FSTEC orders 17 and 21 (there are separate measures for connecting external users), but they are absent from the Methodology. And we cannot add our own values: the Methodology does not provide for this.

Thirdly, some indicators are tightly bound to personal data and are simply not applicable outside that context, for example the indicator "By level of generalization (depersonalization) of PDn". When we use the Methodology to develop a threat model for a GIS that does not process PDn, this indicator simply has to be thrown out.

And "ISPDn not providing any information" is a gem in itself...

Problem number 5. Calculation of relevance for threats that do not have prerequisites



If there is a Y1, there must be a Y2. Y2 is otherwise known as the "probability of the threat". It has 4 gradations: unlikely, low probability, medium probability and high probability (Y2 = 0, 2, 5 and 10, respectively).

The probability of a threat depends on the presence of prerequisites for its realization and on the presence/absence/incompleteness of measures taken to neutralize it.

A threat is deemed unlikely if there are no objective prerequisites for its realization.

So what's the problem? The problem is that instead of stating that unlikely threats are simply excluded from the list of actual threats, the Methodology just assigns them their own Y2 value. This means that for threats with no prerequisites (for example, threats to virtualization environments in systems where virtualization is not used) we must still calculate the coefficients and determine relevance or irrelevance. Isn't that nonsense?

It is nonsense, especially given that under certain circumstances a threat with no prerequisites may, purely by the Methodology, suddenly turn out to be relevant. This is possible with a low level of initial security and/or with threats of medium or high danger. In any case, we would have to spend time on the calculation. Therefore, good practice here is not to apply the Methodology head-on but to weed out unlikely threats at a preliminary stage.
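The pre-filtering described above amounts to one line of code. A minimal sketch; the threat structure and the `has_prerequisites` flag are our own illustration, not terms from the Methodology.

```python
# Hypothetical threat list: each entry records whether objective
# prerequisites for realizing the threat exist in our system.
threats = [
    {"name": "virtualization environment attack", "has_prerequisites": False},
    {"name": "malware infection", "has_prerequisites": True},
]

# Weed out unlikely threats before running the Methodology's calculations:
# only threats with prerequisites go into the relevance assessment.
candidates = [t for t in threats if t["has_prerequisites"]]
print([t["name"] for t in candidates])  # ['malware infection']
```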

So far, the experience of coordination with the FSTEC of models of threats to GIS suggests that the regulator has no complaints about this approach.

(Semi) Problem number 6. Another pointless parameter



The first meaningless (not applicable in reality) parameter was the high degree of initial security. If you read the Methodology carefully, you can find its sibling.

If you have read this far and are wondering what we do with Y1 and Y2: we use them to calculate Y (also known as the possibility of realizing the threat) using the simple formula Y = (Y1 + Y2) / 20. Depending on the resulting value, the possibility of realization may be low, medium, high or very high. And the last gradation is meaningless.
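As a sketch, the whole step fits in one small function. The 0.3 / 0.6 / 0.8 thresholds are the ones the Methodology uses to map Y to a possibility-of-realization level; the function name is our own.

```python
def realization_possibility(y1, y2):
    """Map (Y1, Y2) to Y and a possibility-of-realization level.

    Thresholds per the Methodology: Y <= 0.3 low, Y <= 0.6 medium,
    Y <= 0.8 high, otherwise very high.
    """
    y = (y1 + y2) / 20
    if y <= 0.3:
        return y, "low"
    if y <= 0.6:
        return y, "medium"
    if y <= 0.8:
        return y, "high"
    return y, "very high"

print(realization_possibility(5, 5))    # (0.5, 'medium')
print(realization_possibility(10, 10))  # (1.0, 'very high')
```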

Here is the table from the Methodology by which the relevance of a threat is determined from two parameters: the possibility of realizing the threat (that same Y) and the danger of the threat (more on it below):

  Possibility of realization | Low danger | Medium danger | High danger
  low                        | irrelevant | irrelevant    | relevant
  medium                     | irrelevant | relevant      | relevant
  high                       | relevant   | relevant      | relevant
  very high                  | relevant   | relevant      | relevant

The table shows that a high and a very high possibility of realization are no different: at both levels all threats are relevant regardless of the danger of the threat.
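This decision table can be encoded directly, which makes the redundancy obvious: the "high" and "very high" rows are literally identical. The function and dictionary names are our own sketch.

```python
# Relevance of a threat as a function of the possibility of realization
# (rows) and the danger of the threat (columns), per the Methodology.
RELEVANCE = {
    "low":       {"low": False, "medium": False, "high": True},
    "medium":    {"low": False, "medium": True,  "high": True},
    "high":      {"low": True,  "medium": True,  "high": True},
    "very high": {"low": True,  "medium": True,  "high": True},  # same row
}

def is_relevant(possibility, danger):
    return RELEVANCE[possibility][danger]

print(is_relevant("high", "low"))       # True
print(is_relevant("very high", "low"))  # True: no different from "high"
```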

Why an extra meaningless gradation was introduced is unclear. It makes us neither hot nor cold, so this counts as a problem only in part.

Problem number 7. Negative consequences (danger of threats)



Now let's move on to the "danger of threats". Here there are gradations too (what a twist!): low, medium and high. They differ in the consequences that the realization of the threat would have for the personal data subject: minor negative, simply negative and significant negative consequences, respectively.

You probably think that further on the Methodology explains in detail, with numbers, what minor negative consequences are and how they differ from significant ones? No: the drafters limited themselves to the phrase that the danger of threats is determined "based on a survey of experts (specialists in the field of information security)". It is no secret that in such cases many threat model developers will simply always assign a low danger by default in order to shorten the list of actual threats. It is also worth noting that the "experts" one could interview are often nowhere within a 200 km radius.

The problems with the danger of threats do not end there. Developers of threat models for information systems that do not process personal data suffer additionally. While the concept of "personal data" can easily be replaced by "protected information", what should replace "personal data subject" in the context of negative consequences? Here each threat model developer acts according to the situation.

And what about FSTEC?



A reasonable question: if the current Methodology is so bad, what are the regulator's plans for updating the document?

Here is the story. Back in 2015 the FSTEC published a draft of a new threat modeling methodology. For some time the FSTEC collected comments and suggestions for improving the draft from all interested parties. Then, for the first six months, the question "Where is the new methodology?" was answered with: the regulator received a lot of feedback on the draft document and is now processing it all. About a year later, FSTEC representatives said the document was with the Ministry of Justice for approval (the amended draft was never published; the link above is to the original version). And then they began to shrug.

In general, the fate of the Methodology's replacement is both sad and foggy. Sad because the draft was not bad, certainly better than what we have to use now, although we had our own questions and complaints about it too.

Conclusion



What is the result:



Such are the everyday realities of domestic information security. All the best.

Source: https://habr.com/ru/post/453756/
