
What is security?

Writing about what security is may feel akin to explaining the multiplication table to adults. Indeed, a great deal has been written about security lately, and it seems that even housewives already understand it well. Yet, in my experience, this area is still full of myths and misconceptions.


One of these myths is the notion of security as a feature, a function of the program.

Example No. 1. A serious software development company, working in an area where security requirements are very high. Employees say: "Security is our competitive advantage." But when you start talking with them about it in more detail, they list the features they implemented in their products, something like "all traffic is encrypted" or "we do authentication using tokens." Nothing else is done for the security of their products. And these are very, very intelligent people, even by programmers' standards.
Example No. 2. A serious company specializing in information security, engaged, among other things, in developing security requirements for information systems and software products. I talked with a head of department. Hearing my words "secure program," he asks: "a program with security functions?"

Example No. 3. A big, serious bank. Security requirements for purchased software. The requirements are formulated solely as a description of the required functionality. Moreover, it is clear that these are good specialists, that they feel (but do not yet understand) the limitations of this approach and that it is necessary to go beyond them. Here and there additional clauses slip in, attempts to formulate something else. But they remain within the framework of the functional approach, and this limits them.

Is security not functionality?


Let's start with a fairly traditional definition of security (here I treat "safety" and "security" as synonyms), which Michael Howard and David LeBlanc give in their book [1]:

Secure software is a program that ensures the confidentiality, integrity, and availability of the customers' information, as well as the integrity and availability of the computing resources managed by the system owner or system administrator.


This definition is almost textbook; the same or a very similar one can be found in other books. And it is correct; I think no one will dispute that. The point is in its interpretation. Such a definition can easily be understood in the traditional, "functional" way: for a program to be secure, it is necessary and sufficient to build into it mechanisms that ensure confidentiality, integrity, and availability. And, as the examples at the beginning of this article show, this is often exactly how such definitions are interpreted.

Note, however, that the definition says nothing about security functionality. At the same time, many experts state very clearly that security is a non-functional property of software. For example, John Viega and Gary McGraw write in their book [2]:

Is security a feature that can be added to an existing system? Is it a property that stays the same no matter what environment the code is placed in? The answer to these questions is an emphatic no.

Bolting security on is a bad idea. Security is not a feature that can be added at any time. Security is like safety, dependability, reliability, or any other "-ility": each "-ility" requires systemwide planning and careful design. Security is a property of the system's behavior in a particular environment.


So security is not a function of the software; it is its property. And the nature of this property is quite interesting.

What is so special about security?


Let's take a closer look at the definition of secure software cited at the beginning of the previous section. Expanding the concepts on which it relies, we will see that security determines not what the software should do, but what it should not do.

Indeed, confidentiality means that information will not be available to those for whom it is not intended. Integrity means that information will not be modified by those who are not allowed to modify it, and that those who are allowed will not be able to modify it in unauthorized ways. Finally, availability, as part of information security, means that system resources will not be consumed by outsiders and that outsiders will not be able to take control of the system. That is, all of these are defined not in terms of what the system should do, but in terms of what the system should not do.

One can, of course, object that "security functions" exist precisely for this purpose: they provide protection by denying access to those who are not supposed to have it.

Yes. And no.

Suppose we want to prevent modification of information recorded on a hard disk, thereby ensuring its integrity. If no one should be able to write to the disk at all, we can simply cut the trace on the disk controller responsible for transmitting the write command. The disk will not record anything anymore. We have achieved our goal: no one can write to the disk, and the integrity of the information will be preserved. We removed a function, and "security" increased. Many readers can come up with plenty of similar examples themselves.

On the other hand, if all users had absolutely identical rights with respect to all objects in the system, we could again do without security functions.

More often, however, we need some users to be able to perform an operation while others cannot. This is where "security functions" come in. Their task is to take away from the "wrong" users the ability to use some useful feature of the program. In a sense, security functions are anti-functions.
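
To make the idea concrete, here is a minimal sketch in Python; the names, the ALLOWED_WRITERS set, and the permission rule are invented for this illustration only. The "anti-function" is simply a check wrapped around an otherwise useful operation:

    ALLOWED_WRITERS = {"alice", "operator"}

    def write_record(storage: dict, key: str, value: str) -> None:
        # The useful functionality itself: store a value.
        storage[key] = value

    def secure_write_record(user: str, storage: dict, key: str, value: str) -> None:
        # The "anti-function": take the operation away from everyone not explicitly allowed.
        if user not in ALLOWED_WRITERS:
            raise PermissionError(f"user {user!r} is not allowed to write")
        write_record(storage, key, value)

    records = {}
    secure_write_record("alice", records, "r1", "ok")       # permitted, succeeds
    # secure_write_record("mallory", records, "r1", "no")   # would raise PermissionError

The security code adds no new capability to the program; it only removes one from certain callers.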

That is, in certain situations security functions are a necessary means of ensuring a program's security, but their presence is not sufficient. In addition to having a function that takes away the ability to perform an operation through one mechanism available in the program, it must be ensured that the same goal cannot be achieved through any other mechanism available in the program.

A very good analogy here is a building into which some people must be allowed and others must not. The doorway is the means of getting inside. A door with a lock is a mechanism that lets in anyone who has the right key and keeps everyone else out. But beyond the mere presence of this mechanism, it must be ensured that it is impossible to get inside through open windows or through the roof; the possibility of picking the lock, breaking down the door, breaking through a wall, and much, much more must be excluded. The security of the building depends both on the quality of the door and its lock and on the quality of the design and construction of the entire building.

All the same considerations apply to software development. To make a program secure, security must be created as a property of the program as a whole.

How does this affect the development process?


So far, everything written here may look like an exercise in philosophy. Functions and anti-functions, granting abilities and taking them away. Fine, it is not a function but a property, a quality; so what?

Here it is worth noting that users' wishes and expectations regarding the security of a software product are expressed in negative terms, and this is fundamental. The user expects that the program will not allow an unauthorized person to perform certain operations, and when creating the product we must take precisely this expectation into account. Yet any textbook on software development states that requirements should be expressed in positive terms, and that requirements expressed in negative terms are incorrect: they are ambiguous and untestable.

So, when dealing with security, the developer has to work with negative expectations and solve a number of problems along the way.

First, having understood the user's expectations, an appropriate specification must be developed. For the specification to be correct, it must be written in positive terms. Already at this point the problem arises of translating negative expectations into positive formulations; the two are obviously not equivalent. Together with the customer, the developer will have to look for formulations that, on the one hand, allow the program to be built within the resource constraints and, on the other hand, provide an acceptable level of security.
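
For example (the requirement, the names check_access and read_document, and the ownership rule below are invented purely for illustration), the negative expectation "no one but the owner may read a document" can be restated as a positive, directly verifiable rule:

    def check_access(requesting_user: str, document_owner: str) -> bool:
        # Positive formulation: access is granted exactly when the requester is the owner.
        return requesting_user == document_owner

    def read_document(requesting_user: str, document: dict) -> str:
        if not check_access(requesting_user, document["owner"]):
            raise PermissionError("access denied")
        return document["text"]

    # The positive formulation is directly testable:
    doc = {"owner": "alice", "text": "quarterly report"}
    assert read_document("alice", doc) == "quarterly report"

Note that the positive rule is narrower than the original negative expectation: it says nothing, for instance, about other ways the document's contents might leak, which is exactly the gap the developer and the customer have to negotiate.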

Second, the product must be designed and implemented so that it satisfies these expectations. On the one hand, it would seem enough for the product to meet the specification developed at the first stage. But we all understand that a specification, however thoroughly developed, may not quite accurately reflect the user's expectations, even for functional, positively formulated ones (see, for example, [3]). Therefore, at all stages of design and implementation, the program must be checked for compliance both with the specification and with the requirements expressed in negative terms.
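
One way to keep checking the negatively formulated requirements during implementation is to maintain "negative" tests alongside the functional ones: tests that pass only when a forbidden action fails. A minimal sketch, assuming pytest as the test runner and reusing the hypothetical read_document from the previous example:

    import pytest  # assumption: pytest is the project's test runner

    def test_non_owner_cannot_read_document():
        # The requirement in its negative form: a non-owner must NOT be able to read.
        doc = {"owner": "alice", "text": "quarterly report"}
        with pytest.raises(PermissionError):
            read_document("mallory", doc)  # read_document from the sketch above

Such tests cannot prove the absence of other paths to the protected data, but they at least keep the negative expectation visible throughout development.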

And third, it is necessary to convince the client that we have created a program that satisfies their needs, including security. Of course, one could say that if the program meets the specification, the customer should have no complaints. Formally, that is so. In practice, at this stage it is desirable to give the client arguments showing that the program contains no vulnerabilities, that is, no functionality that violates the security requirements in their negative formulation. If this is not done, the client, having formally accepted the program, may refuse to cooperate further.

Thus, security as a property of software requires special measures at all stages of development. Even a brief discussion of these measures would not fit in a single article.

Another consideration


Obviously, software is bought for its functionality, for what it does. It is the functionality that interests the client in the first place. No one will buy a program unless it does something that helps the customer's business, or helps them have a good time, as games do.

Security is secondary to that. Moreover, security often conflicts with functionality (remember: security functions are anti-functions). No one, of course, will buy a secure but useless product. Therefore, when developing software to which security requirements apply, the important task is to build not just a secure product but a balanced one. And an understanding of the nature of security and of its relationship to the product's functionality plays an important role in the body of knowledge a developer needs to solve this task successfully.

References


1. Michael Howard, David LeBlanc. Writing Secure Code. Microsoft Press (Russian edition, 2005), ISBN 5-7502-0238-0, p. 2.

2. John Viega, Gary McGraw. Building Secure Software: How to Avoid Security Problems the Right Way. Addison-Wesley, 2005, ISBN 0-201-72152-X, p. 13.

3. Glenford Myers. Software Reliability. Mir, 1980 (Russian edition).

Source: https://habr.com/ru/post/115415/

