
The Lost Art of Security Proofs. Part 1 of 2

Yuri Pashkov, Kuzma Pashkov - Lead InfoSec, EMC, VMware trainers @ training.muk.ua

Years of teaching experience in the field of information security allow us to note positive trends in this area. At the same time, there are negative ones as well.

As a result, entire information security teams appear in which everyone, at every level of the hierarchy, from the security administrator up to the manager, mechanically fulfills the requirements of security standards without ever thinking about proving that the automated system is actually secure once those requirements are met.

This article is educational in nature: it demonstrates what an evidence-based approach can offer when creating secure automated systems.

1. How the organization works with data

Suppose that in some organization the following technology is used:
  1. the organization's employees work in several departments that solve interrelated but distinct tasks;
  2. the work of each department is automated; to solve a department's tasks, a common database is used, accessible to all of the department's employees as part of their teamwork. The contents of the database are not secret, but its availability is critical for the department and for the organization as a whole: if the database is unavailable, the organization suffers damage;
  3. in performing their duties, employees use confidential information contained in documents: if the contents of the documents become known to an unlimited circle of persons, the organization may suffer damage;
  4. documents are developed and printed by employees using automation tools at their automated workstations (hereinafter AWS) connected to a local area network (hereinafter LAN).


Documents are created under a multitasking operating system (hereinafter OS), which allows the user to run several programs at once. Document data is stored in files and processed by OS programs.

Teamwork is supported by the employees' workstations, the LAN network equipment, a server, and the network operating system.

2. Security Policy
Consider a discretionary security policy that can be used in our case:
  1. the organization's documents and databases are valuable. Access to them must be controlled by a combination of organizational and technical measures and protection tools;
  2. the automated system is closed: only officials of the organization have access to documents and databases;
  3. the management of the organization, represented by the system administrator, has the right of access to all documents and databases;
  4. the head of the organization, with the administrator's help, assigns employees their rights of access to documents and databases;
  5. an employee has the right of access to the objects he created himself, as well as to the collective-use objects of the user group to which he belongs;
  6. access on behalf of a user to objects created by another user, or to collective-use objects of a group to which this user does not belong, is prohibited.
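As an illustration, the six rules above can be sketched as a single access predicate. Everything below (the function name, the dictionary-based representation of owners, groups and membership, and the literal "admin") is our own illustrative assumption, not part of the article's model:

```python
# Hypothetical sketch of the policy rules above; all names are illustrative.

def is_access_allowed(user, obj, owners, groups, membership, admin="admin"):
    """Decide access under the discretionary policy sketched above.

    owners:     dict object -> user who created it
    groups:     dict object -> group whose collective-use object it is (or absent)
    membership: dict user -> set of groups the user belongs to
    """
    if user == admin:                       # rule 3: the administrator may access everything
        return True
    if owners.get(obj) == user:             # rule 5: access to objects the user created
        return True
    g = groups.get(obj)
    if g is not None and g in membership.get(user, set()):
        return True                         # rule 5: collective-use objects of the user's group
    return False                           # rule 6: everything else is prohibited
```

Note that the predicate denies by default, which is exactly rule 6: anything not explicitly allowed is prohibited.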


The security policy and the mechanisms supporting its implementation form a single secure information-processing environment. This environment has a hierarchical architecture: the upper level is the security policy requirements, followed by the user interface, then several software protection levels (including OS levels), and finally the lower level, hardware protection tools. At every level except the top, the security policy requirements must be implemented; this is precisely what the protection mechanisms are responsible for.

In different systems, protection mechanisms can be implemented in different ways; their design is determined by the general concept of the system.

However, one requirement must be fulfilled rigorously: these mechanisms must adequately implement the requirements of the security policy.

For example, consider how this security policy can be implemented using the model of our automated system.

3. Model of an automated system

We construct a model of an automated system operating with valuable information. Valuable information is stored in the system in the form of information objects. In addition to valuable information, objects may contain other information, for example, program texts, system service information, etc.

Let time be discrete, taking values from the set of time values N = {1, 2, ...}. Denote by t ∈ N the current moment of time.

We consider the state of the system at a given moment to be representable by the states of a finite set of objects; thus the state of the system is the collection of states of its objects. Objects can be created and destroyed, so we speak of the set of objects of the system at a moment t ∈ N, denoted Ot, with |Ot| < ∞: the set of objects is finite. By an object we mean an arbitrary finite set of words of some language.

For each t ∈ N, from the set Ot of objects we select a subset St, the set of subjects of the system at time t. By a subject we mean an object describing a transformation to which a domain (the system resources allocated for the transformation) is allocated and to which control can be transferred. The transformation to which control is transferred is called a process; thus a subject is a pair (domain, process). A subject can be in two states: as a description, in which it is called non-activated, and as a pair (domain, process), in which it is called activated.

A subject can be activated only by another activated subject. On the set St of subjects we define, for each moment t ∈ N, the graph Gt of the functioning of the system: vertices S1 and S2 are connected by an arc S1 → S2 if and only if activation of S1 makes activation of S2 possible. If at every moment t ∈ N no arcs enter or leave a vertex S in Gt, we exclude such subjects from consideration. The activation of subject S2 by subject S1 means the transfer of control from S1 to S2.

From the set St of subjects, select the subset U, the set of users of the system: a user is a subject S such that in all graphs Gt no arcs enter the vertex S. Users are activated by definition and can activate other subjects of the system. We also introduce the notion of a user group G: a subset of the set of users U whose members have equal rights of access to the group's shared objects. Besides activation, other kinds of access are possible. We define the set R of access types; it is finite, i.e. |R| < ∞. Examples of access types are "read", "write", "execute". For each activated subject S and object O one can define ρ, the set of accesses of S to O, which is a subset of the set of access types: ρ ⊆ R.

Denote by S → O the access of subject S to object O. Direct access of subject S to object O is not always possible.
We therefore define S →* O, access on behalf of subject S to object O, meaning that in some time interval [t, t + τ] a sequence of accesses is realized:

S → S1, S1 → S2, ..., Sk → O
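The relation S →* O can be checked mechanically as reachability in a directed graph. The sketch below is our own illustration; the dictionary-of-successors encoding of the functioning graph is an assumption:

```python
from collections import deque

def reaches(edges, start, target):
    """Return True if a chain start -> s1 -> ... -> target exists in the
    directed graph given as {vertex: iterable of successors} (breadth-first search)."""
    seen, queue = {start}, deque([start])
    while queue:
        v = queue.popleft()
        if v == target:
            return True
        for w in edges.get(v, ()):
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return False
```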
Since the functioning of the system is described by the accesses of its activated subjects to its objects, for each moment t ∈ N we can define

Dt = {O | S →* O at time t}
- the set of objects accessed at time t ∈ N.

Within this set, select
Dt(U) = {O | U →* O at time t}
- the set of objects accessed on behalf of user U at time t ∈ N, as well as
Dt(G) = {O | Ui →* O, Ui ∈ G, i = 1(1)n at time t}
- the set of objects accessed on behalf of the users of group G at time t ∈ N.

From the set of objects that can be accessed, select the objects accessed by all users:
D = ⋂t Dt(Ui), i = 1(1)n

These are the common objects of the system. In particular, a user can access objects from D, and can create and destroy objects that do not belong to D. For each group of users one can likewise select the set
Dg(G) = ⋂t Dt(G)
- the collective-use objects of group G.
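With the per-moment access sets in hand, D and Dg(G) are plain set intersections. A minimal sketch (the list-of-sets input format is our own assumption):

```python
from functools import reduce

def common_objects(access_sets):
    """Intersect the per-moment sets Dt (each a set of object names),
    yielding the objects accessible at every moment, as for D and Dg(G)."""
    if not access_sets:
        return set()
    return reduce(lambda a, b: a & b, access_sets)
```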

We assume that a certain subsystem built from the system's objects implements access, and that any attempt of a subject to access an object through this subsystem begins with an access request.

When an object is created, a subject activated on behalf of a user invokes the corresponding procedure, and an object with a unique name appears. We say that the corresponding user generated this object. Denote by Ot(U) the set of objects generated by user U; similarly, Ot(G) is the set of objects generated by the users of group G.

Among the users, single out the privileged user Uadm, the system administrator. This user has the full set of rights of access to the common objects of the system, and at any moment t ∈ N this user is unique.

Now let us express in terms of the model the security policy that defines allowed and unauthorized access in the system.
Allowed is access to objects created by the user himself, to common objects, and to the collective-use objects of a user group to which the user belongs. Mathematically, for any ρ ⊆ R:

U →* O is allowed if O ∈ Ot(U), or O ∈ D, or O ∈ Dg(G) with U ∈ G.

Unauthorized is access on behalf of a user to objects created by another user, or to collective-use objects of a user group to which this user does not belong:

Ui →* O is unauthorized if O ∈ Ot(Uj) with i ≠ j, or O ∈ Dg(G) with Ui ∉ G.

4. Proving the necessity of the security conditions

So, we have built a model of a system that processes valuable information and formulated statements whose validity is, so far, open to question.

Now it is necessary to prove that under certain conditions these statements will be true, in other words, the security policy will be executed, the system model will be protected and information will be processed in the system safely.

First, we intuitively introduce a number of assumptions that ensure, in our opinion, the security of information processing. Further we will prove that the fulfillment of these assumptions will ensure the fulfillment of the security policy. We will move from assumptions to lower-level “services”, lay down the conditions under which the model of the system will be protected, and also prove this fact.

Thus, having proved the security of the model, we can say that the model of the system we built is protected. Following the evidence, it can then be argued that a system implemented in strict accordance with the conditions formulated during modeling will also be protected.

Assumption 1. If subject S is activated at time t, then there is a unique activated subject S′ in St that activated S. At time t = 0 only users are activated.

Lemma 1. If at moment t subject S is activated, then there is a unique user U on whose behalf subject S is activated, i.e., there is a chain

U → S1 → S2 → ... → Sk → S
Proof. According to Assumption 1, there exists a unique subject Sk that activated S. If Sk = Ui for some Ui ∈ U, the lemma is proved.

If Sk ∉ U, then there is a unique subject Sk-1 that activated Sk. Since the system Σ has been running for a finite time, and at the initial moment only users are activated, we reach a user at the beginning of the chain; by the definition of a user, the chain ends there. The lemma is proved.
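The proof of Lemma 1 is constructive: because every activated subject has a unique activator (Assumption 1), walking the activator links backwards from any subject must terminate at a user. A sketch, under our own assumption that the activator relation is stored as a simple parent map:

```python
def originating_user(subject, activator, users):
    """Follow the unique-activator links (Assumption 1) back from `subject`
    until a user is reached; by Lemma 1 the result exists and is unique.

    activator: dict mapping each activated subject to the subject that activated it
    users:     set of users (subjects that no other subject activates)
    """
    s = subject
    while s not in users:
        s = activator[s]   # unique by Assumption 1
    return s
```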

We will simulate the functioning of the system by a sequence of accesses of the activated subjects of the system to the objects of the system.

Assumption 2. The operation of the system Σ is described by a sequence of accesses of the sets of subjects to the sets of objects at each time point t∈N.

Lemma 2. For each t ∈ N and each O ∈ Ot, O ∉ D, there is a unique user U such that O ∈ Ot(U), i.e., U generated O.

Proof. Since O0 = {Ui} ∪ D, the object O ∈ Ot, O ∉ D, was generated at some moment s, 0 ≤ s ≤ t. Then in Os there was an activated subject S that created O, and, as noted earlier, there is a unique user U on whose behalf S was activated; hence U generated O. The lemma is proved.

One of the drawbacks of a discretionary security policy is the free distribution of rights. We therefore assume that only one user in the system has all the rights of access to the common objects of the system, i.e., is their owner, and does not hand those rights over in full to anyone.

Assumption 3. In the system Σ, at each moment t ∈ N, there is a unique user Uadm who has all rights with respect to the common objects of the system.
Next, assume that no valuable information can be stored in the common objects.

Assumption 4. If O ∈ D, then accesses to O of any types ρ1, ρ2 ⊆ R cannot create a channel of information leakage.
Finally, assume that the system behaves as follows when a subject activated on behalf of a user attempts to access a system object.

Assumption 5. If some subject S ∈ St is activated on behalf of user Ui (i.e., Ui →* S), and subject S is granted access at time t to object O, then either O ∈ D, or O ∈ Ot(Ui), or O ∈ Dg(G) with Ui ∈ G; otherwise the system denies the access.

Theorem 1. Suppose that Assumptions 1-5 and the rules of the security policy hold in the constructed system. Then unauthorized access is impossible in the system.

Proof. Suppose the contrary: at some moment t there is an access of some type ρ in which, on behalf of user Ui, an object O of another user Uj is accessed, or a collective-use object of a group to which Ui does not belong:

Ui →* O, where O ∈ Ot(Uj) with i ≠ j, or O ∈ Dg(G) with Ui ∉ G.
Let S1, S2, ..., Sm be all the activated subjects having accesses βi ⊆ ρ, i = 1, 2, ..., m, to object O at time t. Then, by Lemma 2, this set of subjects splits into three subsets.

Subjects that are common objects:
A = {Sl | Sl ∈ D}

Subjects that are objects generated by user Uj or belonging to the collective use of group G:
B = {Sl | Sl ∈ Ot(Uj) or Sl ∈ Dg(G)}

Subjects that are objects through which, by our assumption, the unauthorized access is performed:
C = {Sl | the access of Sl to O is itself unauthorized}
By Lemma 1, for each Sl, l = 1, 2, ..., m, there is a unique user on whose behalf Sl is activated. If Sl ∈ A, then by Assumption 5 and the condition of Theorem 1 that the access is allowed, Sl is activated on behalf of Uj, or on behalf of a user belonging to G. This contradicts the assumption that the access is performed on behalf of Ui.

If Sl ∈ B, then the access Ui →* Sl is impossible under the security policy.

Finally, if Sl ∈ C, then there is a chain of length (k + 1):

Ui → S(1) → ... → S(k) → Sl, with subject S(k) ∈ C.

Then there is a chain of length k with the same property, and, repeating the argument, after k steps we obtain the direct access

Ui → O.

This last access is impossible if the security policy is enforced. Hence the assumption is false, and Theorem 1 is proved.

Now let us construct a set of lower-level "services", more convenient to implement in an automated system, that support the security policy. In other words, we want to define a set of conditions to be implemented in the system such that the sufficiency of these conditions for fulfilling the security policy can be proved.

Condition 1 (identification and authentication). For any access request of a subject S to an object O, the membership of S and O in the sets Ot(Ui), i = 1(1)n; D; Dg(Gk), k = 1(1)g, is computed.

Condition 2 (permissive subsystem). If at time t an access to an object O ∈ Ot(Uj) is requested on behalf of user Ui, then from i = j it follows that the access is granted, and from i ≠ j that it is denied. Similarly, if at time t an access to an object O ∈ Dg(G) is requested on behalf of user U, then from U ∈ G it follows that the access is granted, and from U ∉ G that it is denied.

Condition 3 (no workarounds). For any t ∈ N and ρ ⊆ R, if a subject S activated by time t has received access ρ to an object O at time t, then at time t an access request of S to O occurred.
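Taken together, Conditions 1-3 describe what is usually called a reference monitor: a single mediation point that every access request passes through (Condition 3), that can compute the membership of subjects and objects in the relevant sets (Condition 1), and that grants or denies strictly by the policy (Condition 2). A minimal sketch, with illustrative names and a data layout of our own choosing:

```python
class ReferenceMonitor:
    """Hypothetical sketch of the permissive subsystem of Conditions 1-3."""

    def __init__(self, owners, group_objects, membership):
        # Condition 1: these tables make membership in Ot(Ui) and Dg(Gk) computable.
        self.owners = owners                # object -> user who generated it
        self.group_objects = group_objects  # object -> group whose collective-use object it is
        self.membership = membership        # user -> set of groups
        self.log = []                       # Condition 3: every access leaves a request here

    def request(self, user, obj):
        """Condition 2: grant strictly by the policy; deny everything else."""
        self.log.append((user, obj))
        if self.owners.get(obj) == user:
            return True
        g = self.group_objects.get(obj)
        return g is not None and g in self.membership.get(user, set())
```

Auditing `log` against the set of accesses actually obtained is how the "no workarounds" condition would be checked in practice.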

Theorem 2. If Conditions 1-3 are fulfilled in the constructed system, then the security policy is fulfilled (sufficiency of the conditions for implementing the policy).

Proof. The security policy requires that, for any ρ ⊆ R:

a) access on behalf of user Ui to an object O is granted when O ∈ Ot(Ui), or O ∈ D, or O ∈ Dg(G) with Ui ∈ G;

b) access on behalf of user Ui to an object O is denied when O ∈ Ot(Uj) with i ≠ j, or O ∈ Dg(G) with Ui ∉ G.

a) Consider a request for access on behalf of user Ui to an object O. By Condition 1, the membership of the subject and of O in the sets Ot(Ui), i = 1(1)n; D; Dg(Gk), k = 1(1)g, is computed. If O ∈ Ot(Uj) and i = j, then by Condition 2 the access is granted. If O ∈ Dg(G) and Ui ∈ G, then again by Condition 2 the access is granted.

b) If O ∈ Ot(Uj) and i ≠ j, then by Condition 2 the access is denied. Similarly, if O ∈ Dg(G) and Ui ∉ G, then by Condition 2 the access is denied.

Finally, if a subject S has received access by time t, then by Condition 3 a corresponding access request occurred, so no access bypasses the permissive subsystem. Theorem 2 is proved.

Theorem 2 shows that if Conditions 1-3 are implemented in the system, the security policy is fulfilled.

Thus, a system implemented in strict accordance with Assumptions 1-5 and Conditions 1-3 will be protected in the sense of the formulated security policy.

To be continued. We await your questions on CISSP / CISM / CISA / Security+ training and certification at PashkovK@muk.com.ua

18 - 22 2016 - MUK-S0101 Security+
1 - 5 2016 - MUK-S0102 CISA

Source: https://habr.com/ru/post/272789/

