
The book "Survive The Deep End: PHP Security". Part 1

Image: "Big Five Part 3" by CrazyAsian1

Hey. My name is Sasha Barannik. At Mail.Ru Group I manage a web development department of 15 employees. We have learned to build websites for tens of millions of users and to calmly handle a daily audience of several million. I have been doing web development for about 20 years, and for the last 15 of them I have been programming mostly in PHP. Although the capabilities of the language and the approach to development have changed dramatically over this time, understanding the main vulnerabilities and knowing how to protect against them remain key skills for any developer.
You can find many security articles and guides on the Internet. This book struck me as quite detailed, yet concise and understandable. I hope it helps you learn something new and make your sites safer and more secure.

P.S. The book is long, so the translation will be published as a series of articles. So let's get started...

Another security book in PHP?


There are many ways to start a book on PHP security. Unfortunately, I have not read any of them, so I will have to figure it out as I write. Perhaps I will start with the most basic things and hope it works out.

If we consider an abstract web application launched online by company X, then we can assume that it contains a number of components that, if cracked, can cause substantial harm. What, for example?

  1. Harm to users: access to e-mail, passwords, personal data, bank card details, business secrets, contact lists, transaction history and deeply guarded secrets (such as the fact that someone named his dog Blackjack). Leaking this data harms users (individuals and companies). Web applications that misuse such data, and sites that exploit their users' trust, can cause harm as well.
  2. Harm to company X itself: its reputation suffers from the damage done to users, compensation has to be paid, important business information is lost, and additional costs arise: infrastructure, security improvements, cleanup, legal fees, large severance packages for dismissed top managers, and so on.

I will focus on these two categories, since they cover most of the trouble that a web application security system should prevent. Every company that suffers a serious security breach hurries to state in press releases and on its website how seriously it has always taken security. So I advise you, with all your heart, to appreciate the importance of this problem in advance, before you run into it in practice.

Unfortunately, security issues are very often addressed after the fact. It is considered most important to create a working application that meets users' needs within an acceptable budget and timeframe. This is an understandable set of priorities, but security cannot be ignored forever. It is much better to keep it in mind at all times, building specific solutions in during development, while the cost of change is still small.

Treating security as secondary is largely a product of programming culture. Some programmers break into a cold sweat at the mere thought of a vulnerability, while others may dispute the existence of a vulnerability right up until someone proves it to them. Between these two extremes are plenty of programmers who simply shrug, because nothing has gone wrong for them yet. They find this strange world hard to understand.

Since the web application security system must protect users who trust application services, you need to know the answers to the questions:

  1. Who wants to attack us?
  2. How can they attack us?
  3. How can we stop them?

Who wants to attack us?


The answer to the first question is very simple: everything and everyone. Yes, the whole universe wants to teach you a lesson. That kid with the overclocked computer running Kali Linux? He has probably attacked you already. The shady character who loves to put a spoke in other people's wheels? He has probably already hired someone to attack you. The trusted REST API you fetch data from every hour? It was probably hacked a month ago to feed you infected data. Even I might attack you! So do not take this book on blind faith. Assume I am lying, and find a programmer who will expose me and my bad advice. On the other hand, maybe he is also about to hack you...

The point of this paranoia is to make it easier to mentally categorize everything that interacts with your web application ("User", "Hacker", "Database", "Untrusted input", "Manager", "REST API") and then assign each category a trust index. Obviously, the "Hacker" cannot be trusted, but what about the "Database"? "Untrusted input" got its name for a reason, but would you really filter a blog post pulled from your colleague's trusted Atom feed?

Those who are serious about hacking web applications learn to exploit this mindset, often attacking not the untrusted data sources but the trusted ones, which are less likely to be well protected. This is not a random choice: in real life, entities with a higher trust index arouse less suspicion. These data sources are the first thing I look at when analyzing an application.

Let's return to the "Database". If we assume that an attacker can gain access to the database (and we paranoids always assume that), then it can never be trusted. Most applications trust their databases without question. From the outside, a web application looks like a single whole, but inside it is a system of individual communicating components. If we assume all of these components are trusted, then when one of them is hacked, the rest will quickly be compromised too. Such catastrophic security failures cannot be waved away with the phrase "If the database is hacked, we lose anyway." You may say so, but it is by no means a given that you would lose, if you did not trust the database in the first place and acted accordingly!

How can they attack us?


The answer to the second question is a rather long list. You can be attacked from anywhere that any component or layer of the web application receives data. In essence, web applications just process data and move it from place to place. User requests, databases, APIs, blog feeds, forms, cookies, repositories, PHP environment variables, configuration files, configuration files again, even the PHP files you execute: all of these can potentially carry infected data that breaks through the security system and does damage. In fact, if malicious data is not explicitly present in the PHP code handling the request, it is likely to arrive as a "payload". That assumes a) you wrote the original PHP code, b) it was properly reviewed, and c) you are not being paid by a criminal organization.

If you use data from a source without checking that it is completely safe and fit for use, you are potentially open to attack. You also need to check that the data you receive matches the data you send. If data is not made completely safe before output, you will also have serious problems. All of this can be summed up in the PHP rule "Validate input; escape output."
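A minimal sketch of this rule in action (the field name and the $comment variable are illustrative, not from the book):

// Validate on the way in: accept only what we expect.
$age = filter_var(isset($_POST['age']) ? $_POST['age'] : '', FILTER_VALIDATE_INT);
if ($age === false) {
    exit('Invalid age'); // reject; do not try to fix the input
}

// Escape on the way out, for the specific output context (HTML here).
echo htmlspecialchars($comment, ENT_QUOTES, 'UTF-8');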

These are the obvious sources of data that we must somehow control. Sources can also include client-side storage. For example, most applications identify users by assigning them unique session IDs that can be stored in cookies. If an attacker steals the cookie value, they can impersonate another user. And while we can reduce some of the risks of user data being intercepted or forged, we cannot guarantee the physical security of the user's computer. We cannot even guarantee that users will consider "123456" a stupid password after "password". To add extra spice, cookies are no longer the only kind of client-side storage today.

Another often overlooked risk is the integrity of your source code. In PHP it is increasingly popular to build applications from a large number of loosely coupled libraries, modules and framework packages. Many of them are downloaded from public repositories like GitHub and installed with package installers such as Composer and its web companion, Packagist.org. The security of your source code therefore depends entirely on the security of all these third-party services and components. If GitHub is compromised, it will most likely be used to distribute code with a malicious addition. If Packagist.org is compromised, an attacker can redirect package requests to their own malicious packages.

Composer and Packagist.org are currently subject to known vulnerabilities in dependency resolution and package distribution, so always double-check everything in your production environment and verify the source of all packages from Packagist.org.

How can we stop them?


Breaking a web application's security can be anywhere from absurdly easy to extremely time-consuming. It is fair to assume that every web application has a vulnerability somewhere. The reason is simple: all applications are written by people, and people make mistakes. Perfect security is therefore an impossible dream. All applications can contain vulnerabilities, and the programmers' job is to minimize the risk.

You will have to think carefully to reduce the likely damage from an attack on your web application. Along the way I will describe possible attack methods. Some are obvious, some are not. But in any case, solving the problem requires keeping some basic security principles in mind.

Basic security principles


When developing remedies, their effectiveness can be assessed using the following considerations. Some I have already cited above.

  1. Do not trust anyone or anything.
  2. Always assume the worst-case scenario.
  3. Apply multi-level protection (Defense in Depth).
  4. Adhere to the principle "the simpler the better" (Keep It Simple, Stupid: KISS).
  5. Adhere to the principle of least privilege.
  6. Attackers can smell obscurity.
  7. Read the documentation (RTFM), but never trust it.
  8. If it is not tested, it does not work.
  9. It is always your fault!

Let's take a quick look at all the points.

1. Do not trust anyone or anything


As mentioned above, the correct stance is to assume that everyone and everything your web application interacts with wants to hack it, including the other components and application layers involved in processing requests. Everything and everyone. No exceptions.

2. Always assume the worst case scenario.


Many security systems share a common trait: no matter how well they are built, every one of them can be breached. If you take this into account, you will quickly see the point of the second principle. Orienting toward the worst-case scenario helps you assess the potential extent and severity of an attack. And if it actually happens, you may be able to soften the unpleasant consequences thanks to additional defenses and architectural changes. Perhaps the traditional solution you are using has already been superseded by something better?

3. Apply Multi-Level Protection (Defense-in-Depth)


Multi-level protection is borrowed from military science, where people long ago realized that multiple walls, sandbags, equipment, body armor, and even flasks covering vital organs from enemy bullets and blades are the right approach to security. You never know which of these will fail, so several layers of protection ensure you are not betting everything on a single defense. And it is not only about single points of failure. Imagine an attacker who has scaled a giant medieval wall with a ladder, only to find another wall behind it, from which he is showered with arrows. Hackers will feel the same way.

4. Adhere to the principle “the simpler the better” (Keep It Simple Stupid, KISS)


The best defenses are always simple. They are easy to develop, implement, understand, use and test. Simplicity reduces the number of errors, stimulates the correct operation of the application and facilitates implementation even in the most complex and unfriendly environments.

5. Adhere to the principle of least privilege


Each participant in an exchange of information (a user, a process, a program) must have only the access rights it needs to perform its function.

6. Attackers can smell obscurity


"Security through obscurity" rests on the assumption that if you use defense A and tell no one how it works, or even that it exists, this will magically help you because attackers will be left baffled. In reality it gives only a slight advantage. An experienced attacker can often work out the measures you have taken, so you need defenses that hold up even when they are known. Those who are so sure that obscurity eliminates the need for real protection should be made an example of, to rid them of their illusions.

7. Read the documentation (RTFM), but never trust it.


The PHP manual is the Bible. Of course, it was not written by the Flying Spaghetti Monster, so formally it may contain a certain amount of half-truths, gaps, misinterpretations or mistakes that have not yet been noticed or fixed. The same goes for Stack Overflow.

Specialized sources of security wisdom (PHP-focused and otherwise) generally provide more detailed knowledge. The closest thing to a security Bible for PHP is the OWASP site, with the articles, guides and advice it offers. If OWASP recommends against something, never do it!

8. If it is not tested, then it does not work.


When introducing a defense, you must write all the tests needed to verify it. That includes pretending to be a hacker with a prison sentence in his future. This may seem far-fetched, but familiarity with web application hacking techniques is good practice: you will learn about possible vulnerabilities, and your paranoia will grow. You do not necessarily have to tell management about your newly acquired appreciation for hacking web applications. Be sure to use automated tools for finding vulnerabilities. They are useful, but of course they do not replace thorough code review, or even manual testing of the application. The more resources you spend on testing, the more reliable your application will be.

9. It is always your fault!


Programmers are used to assuming that security vulnerabilities will only ever be found by scattered, untargeted attacks, and that their consequences are minor.

For example, data leaks (a well-documented and widespread form of attack) are often regarded as minor security problems because they do not affect users directly. Yet leaking information about software versions, development languages, source code locations, application and business logic, database structure and other aspects of the web application's environment and internals is often essential to a successful attack.

At the same time, successful attacks on security systems are often combinations of attacks. Individually each one seems unimportant, but sometimes it opens the door to others. For example, an SQL injection sometimes requires the name of a specific user, which can be obtained with a timing attack against the administrative interface instead of a far more expensive and conspicuous brute-force search. In turn, the SQL injection makes it possible to mount an XSS attack on a specific administrative account without attracting attention with a mass of suspicious log entries.

The danger of considering vulnerabilities in isolation is that their threat gets underestimated, and so they are treated too carelessly. Programmers are often too lazy to fix a vulnerability they consider insignificant. Shifting responsibility for secure development onto downstream programmers or users is also common practice, often without documenting the specific problems: the very existence of these vulnerabilities is not even acknowledged.

Apparent insignificance does not matter. It is irresponsible to force programmers or users to fix your vulnerabilities, especially when you have not even told them about the problems.

Input Validation


Input validation is the outer defense perimeter of your web application. It protects the core business logic, data processing and output generation. In a literal sense, everything outside this perimeter, except the code executed by the current request, is considered enemy territory. All possible entrances and exits of the perimeter are guarded day and night by belligerent sentries who shoot first and ask questions later. Separately guarded (and very suspicious-looking) "allies" are connected to the perimeter, including the "Model", the "Database" and the "File System". Nobody wants to shoot at them, but if they push their luck... bang. Each ally has its own perimeter, which may or may not trust ours.

Remember what I said about who you can trust? No one and nothing. The advice not to trust "user input" is everywhere in the PHP world. It is one of those trust-level categories. By assuming that users cannot be trusted, we imply that everything else can be. This is not true. Users are merely the most obvious untrusted source of input, because we do not know them and cannot control them.

Verification criteria


Input validation is both the most obvious and the least reliable defense of a web application. The overwhelming majority of vulnerabilities stem from failures of the validation system, so it is very important that this part of the defense works correctly. It can still fail, so keep the following considerations in mind. When implementing custom validators or using third-party validation libraries, remember that third-party solutions usually handle generic tasks and omit key checks your application may need. When using any library intended for security purposes, be sure to independently check it for vulnerabilities and correct behavior. I also recommend remembering that PHP can exhibit strange, and possibly unsafe, behavior. Take a look at this example with the filtering functions:

filter_var('php://example.org', FILTER_VALIDATE_URL); // returns the URL: the filter accepts it

The filter passes it without question. The problem is that the resulting php:// URL can be passed to a PHP function that expects a remote HTTP address, not a URL that returns data from an executable PHP script (via the php:// handler). The vulnerability arises because the filter option has no way to restrict the allowed URI schemes, even though the application expects an http, https or mailto link rather than some PHP-specific URI. Such an overly generic approach to validation must be avoided at all costs.
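One way to close this hole is to combine the generic filter with an explicit whitelist of allowed schemes. A sketch, assuming only http and https links are acceptable:

function validateHttpUrl($url)
{
    if (filter_var($url, FILTER_VALIDATE_URL) === false) {
        return false;
    }
    // filter_var() accepts any scheme, so restrict it explicitly.
    $scheme = parse_url($url, PHP_URL_SCHEME);
    return in_array(strtolower((string) $scheme), array('http', 'https'), true);
}

var_dump(validateHttpUrl('php://example.org')); // bool(false)
var_dump(validateHttpUrl('https://example.org')); // bool(true)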

Be careful with the context.


Input validation should stop unsafe data from entering the web application. A serious stumbling block: data is usually checked for safety only against its first intended use.

Suppose I receive data containing a name. I can easily check it for apostrophes, hyphens, brackets, spaces and a broad range of alphanumeric Unicode characters. The name is then valid data that can be displayed (its first intended use). But if I use it somewhere else (for example, in a database query), it ends up in a new context, and some characters that are valid in a name become dangerous there: if the name is interpolated into an SQL string, it can turn into an SQL injection.

It turns out that input validation is inherently unreliable. It works best for cutting off unambiguously invalid values: say, when something must be an integer, an alphanumeric string, or an HTTP URL. Such formats and values have clear constraints and, when validated properly, are less likely to pose a threat. Other values (free-form text, GET/POST arrays, HTML) are harder to validate, and the probability of malicious data slipping through is higher.

Since our application transfers data between contexts most of the time, we cannot simply validate all the input and call it a day. Input validation is only the first line of defense, by no means the only one.

Along with input validation, another defense technique, escaping, is often used. With it, data is made safe on entry into each new context. It is typically used to protect against cross-site scripting (XSS), but it also serves as a filtering tool in many other tasks.

Escaping protects the receiving system from misinterpreting the outgoing data. But it is not enough: as data enters each new context, validation specific to that particular context is needed as well.
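As an illustration, here is the same untrusted value treated differently in three contexts (a sketch; $pdo is assumed to be an existing PDO connection):

$name = $_GET['name']; // untrusted input

echo htmlspecialchars($name, ENT_QUOTES, 'UTF-8'); // HTML context
$link = 'https://example.com/?q=' . rawurlencode($name); // URL context

// SQL context: do not escape by hand, use a parameterized query instead.
$stmt = $pdo->prepare('SELECT id FROM users WHERE name = ?');
$stmt->execute(array($name));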

Although this may look like a duplicate of the initial input validation, the extra steps actually take into account the specifics of the current context, where the data requirements can be quite different. For example, data coming from a form may contain a percentage. On first use we verify that the value is indeed an integer. But when it is handed to our application's model, new requirements may appear: the value must fit in a particular range, which the application's business logic demands. If that additional check is not performed in the new context, serious problems may arise. A sketch of this two-stage check follows below.
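Here is that two-stage check for the percentage example (the field name is illustrative):

// First context: the HTTP boundary. Is it an integer at all?
$raw = isset($_POST['discount']) ? $_POST['discount'] : '';
if (!is_string($raw) || !ctype_digit($raw)) {
    exit('Not an integer');
}
$discount = (int) $raw;

// Second context: the model. Does it satisfy the business rules?
if ($discount < 0 || $discount > 100) {
    exit('A percentage must lie between 0 and 100');
}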

Use only white lists, not black ones.


Blacklists and whitelists are the two fundamental approaches to input validation. A blacklist checks for invalid data, a whitelist for valid data. Whitelists are preferable because the check passes only the data we expect. Blacklists, by contrast, rely on programmers anticipating every possible kind of bad data, so it is much easier to get confused, miss something or make a mistake.

A good example is any routine that tries to make HTML safe for unescaped output in a template. With a blacklist we have to check that the HTML contains no dangerous elements, attributes, styles or executable JavaScript. That is a lot of work, and blacklist-based HTML sanitizers always manage to overlook some dangerous combination of code. Whitelist-based tools eliminate this uncertainty by allowing only known, permitted elements and attributes; everything else is separated, escaped or removed, whatever it may be.

So whitelists are preferable for any verification procedures due to higher security and reliability.
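In code, a whitelist check is often as simple as comparing against a fixed list of known-good values (a sketch using the country field from the form shown later):

$allowedCountries = array('Rep. Of Ireland', 'United Kingdom');
$country = isset($_POST['country']) ? $_POST['country'] : '';

// Strict comparison: anything not on the list is rejected outright.
if (!in_array($country, $allowedCountries, true)) {
    exit('Invalid country');
}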

Never attempt to correct input.


Input validation is often accompanied by filtering. Where validation merely judges whether data is correct (giving a positive or negative result), filtering changes the data being checked so that it satisfies specific rules.

In itself this is mostly harmless. Traditional filters might, for example, remove everything except digits from telephone numbers (including perfectly reasonable brackets and hyphens) or trim unneeded leading or trailing whitespace. In such cases minimal cleanup is done to eliminate display or transmission errors. However, it is easy to get carried away and start using filtering to strip out malicious data.

One consequence of trying to fix input is that the attacker can predict the effect of your fixes. Suppose a certain string value is invalid. You search for it, delete it and finish filtering. But what if the attacker crafts a value that is split apart by the very string you remove, in order to outsmart your filter?

 <scr<script>ipt>alert(document.cookie);</scr<script>ipt> 

In this example, naive tag filtering achieves nothing: removing the explicit <script> tags leaves behind data that is a perfectly valid HTML script element. The same applies to filtering for any other specific format. All of this clearly shows why input validation must not be the application's last line of defense.

Instead of trying to fix the input, just use a whitelist validator and reject such input attempts outright. And where filtering is genuinely needed, always filter before validating, never after.

Never trust external verification tools and constantly monitor vulnerabilities.


Earlier I noted that validation is needed every time data crosses into a new context. This also applies to validation performed outside the web application itself, such as the checks and other constraints applied to HTML forms in the browser. Look at this HTML 5 form (labels omitted):

<form method="post" name="signup">
    <input name="fname" placeholder="First Name" required />
    <input name="lname" placeholder="Last Name" required />
    <input type="email" name="email" placeholder="someone@example.com" required />
    <input type="url" name="website" required />
    <input name="birthday" type="date" pattern="^\d{1,2}/\d{1,2}/\d{2}$" />
    <select name="country" required>
        <option>Rep. Of Ireland</option>
        <option>United Kingdom</option>
    </select>
    <input type="number" size="3" name="countpets" min="0" max="100" value="1" required />
    <textarea name="foundus" maxlength="140"></textarea>
    <input type="submit" name="submit" value="Submit" />
</form>

HTML forms can impose constraints on input. You can restrict choices with a fixed list of items, set minimum and maximum values, and limit text length. HTML 5 goes even further: browsers can validate URLs and email addresses, and constrain dates, numbers and ranges (though support for the last two is rather inconsistent). Browsers can also validate input against JavaScript-style regular expressions placed in the pattern attribute.

For all this abundance of controls, remember: their purpose is to improve your application's usability. Any attacker can create a form that lacks the constraints of your original form, or simply build an HTTP client that fills the form in automatically!

Another example of external validators is receiving data from third-party APIs, such as Twitter. This social network has a good reputation and is usually trusted without question. But since we are paranoid, we should not even trust Twitter. If it were compromised, its responses would contain unsafe data we are not prepared for. So even here, apply your own validation, so you are not left defenseless if something goes wrong.

Where we do rely on external validation tools, it is convenient to watch for bypass attempts. For example, if an HTML form limits the maximum field length and we receive input that hits that limit, it is reasonable to suspect the user is trying to bypass the check. This lets us log failures of the external checks and take further measures against potential attacks, such as limiting access or throttling requests.

Avoid type conversion in PHP


PHP is not a strongly typed language, and most of its functions and operators are not type-safe. This can lead to serious problems, and it is not so much the values themselves that are vulnerable as the validators. For example:

assert(0 == '0ABC');  // TRUE
assert(0 == 'ABC');   // TRUE (no leading digit needed!)
assert(0 === '0ABC'); // NULL / a warning that the assertion failed
// (note: since PHP 8.0, non-numeric strings no longer compare equal to 0)

When writing validators, make sure you use strict comparison and explicit type casting whenever the input or output values might be strings. For example, forms return strings, so if you work with data that must be an integer, be sure to check its type:

function checkIntegerRange($int, $min, $max)
{
    if (is_string($int) && !ctype_digit($int)) {
        return false; // contains characters other than digits
    }
    if (!is_int((int) $int)) {
        return false; // some other non-integer value, or one above PHP_INT_MAX
    }
    return ($int >= $min && $int <= $max);
}

Never do this:

function checkIntegerRangeTheWrongWay($int, $min, $max)
{
    return ($int >= $min && $int <= $max);
}

In this case, any string that starts with a digit falling inside the desired range will pass the check.

assert(checkIntegerRange("6' OR 1=1", 5, 10));            // NULL / a warning that the assertion failed
assert(checkIntegerRangeTheWrongWay("6' OR 1=1", 5, 10)); // TRUE

Type-juggling subtleties lurk in many operations and functions, for example in_array(), which is often used to check whether a value appears among the valid options in an array.
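For example, before PHP 8 in_array() compared loosely by default, which could produce surprising matches; passing true as the third argument forces strict comparison:

$allowed = array('open', 'closed');

var_dump(in_array(0, $allowed));       // bool(true) in PHP < 8: 'open' == 0
var_dump(in_array(0, $allowed, true)); // bool(false): the types must match too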

Types of data verification


Errors in input validation lead to vulnerabilities and corrupted data. Below, using PHP as the example, we consider a number of kinds of checks.

Data type check


We simply check what type the data is: string, integer, float, array, and so on. Since much of the data arrives via forms, we cannot blindly use PHP functions such as is_int(), because a single value may be a string yet still represent an integer up to the maximum PHP supports. There is no need to be overly inventive here or to reach for regular expressions, as that would violate the KISS principle.
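A sketch of a type check that tolerates string input from a form but still guards against overflow (the parameter name is illustrative):

$raw = isset($_GET['page']) ? $_GET['page'] : '';

// Digits only; casting there and back also detects values above PHP_INT_MAX.
if (is_string($raw) && ctype_digit($raw) && (string) (int) $raw === $raw) {
    $page = (int) $raw;
} else {
    exit('Not a valid integer');
}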

Character Validation


We check that a string contains only valid characters. In PHP this is most often done with the ctype functions, with regular expressions for the more complex cases. If you only need ASCII characters, it is best to stick with the ctype functions.
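A sketch for ASCII input (the username policy here is an illustrative assumption):

$username = isset($_POST['username']) ? $_POST['username'] : '';

// Letters and digits only; everything else is rejected.
if (!is_string($username) || !ctype_alnum($username)) {
    exit('Invalid username');
}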

Format check


Here we make sure the data matches a specific pattern of valid characters. Notable examples are emails, URLs and dates. The best tools are PHP's filter_var() function, the DateTime class and, for other formats, regular expressions. The more complex the format, the more you should lean on proven tools for format and syntax checking.
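Sketches of format checks with the tools named above (the sample values are illustrative):

// Email: filter_var() returns the value on success and false on failure.
$email = filter_var('someone@example.com', FILTER_VALIDATE_EMAIL);

// Date: createFromFormat() returns false when the string does not match.
$date = DateTime::createFromFormat('Y-m-d', '2016-10-03');
if ($date === false || $date->format('Y-m-d') !== '2016-10-03') {
    exit('Invalid date');
}

The round trip through format() catches values like '2016-02-31', which DateTime would otherwise silently roll over into March.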

Constraint check


Here we check that a value falls within an allowed range. For example, we only accept values greater than 5, or from 0 to 3, or not equal to 34. Constraint checks also apply to string lengths, file sizes, image dimensions, date ranges and so on.
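filter_var() can combine the integer check with a range restriction in a single step:

$options = array('options' => array('min_range' => 0, 'max_range' => 100));

var_dump(filter_var('42', FILTER_VALIDATE_INT, $options));  // int(42)
var_dump(filter_var('101', FILTER_VALIDATE_INT, $options)); // bool(false)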

Presence check


We check that all the data needed for further work is present: for a registration form, say, the login, password and email address. If anything is missing, we consider the data invalid.
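A minimal presence check for the registration example (the field names are illustrative):

foreach (array('login', 'password', 'email') as $field) {
    if (!isset($_POST[$field]) || !is_string($_POST[$field]) || trim($_POST[$field]) === '') {
        exit("Missing required field: $field");
    }
}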

Match check


This check is used when the user must enter the same value twice to rule out typos, for example repeating the password when registering on a site. If the two values are identical, the data is considered correct.
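A sketch of such a check for the password-confirmation example:

$pass1 = isset($_POST['password']) ? $_POST['password'] : '';
$pass2 = isset($_POST['password_repeat']) ? $_POST['password_repeat'] : '';

// hash_equals() (PHP 5.6+) compares strings in constant time.
if (!is_string($pass1) || !is_string($pass2) || !hash_equals($pass1, $pass2)) {
    exit('Passwords do not match');
}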

Logical check


In essence, this is error control: we want to make sure the received data will not trigger an error or exception in the application. For example, substituting a search string into a regular expression may cause a pattern compilation error. Integers above a certain value, zero in the denominator of a division, or oddities like +0, 0 and -0 can also be dangerous.
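For the regular-expression example: neutralize user input before embedding it in a pattern, and treat a compilation failure as invalid data (a sketch):

$search = isset($_GET['q']) ? $_GET['q'] : '';

// preg_quote() escapes regex metacharacters in the user's search string.
$pattern = '/' . preg_quote($search, '/') . '/u';

// preg_match() returns false when the pattern fails to compile.
if (@preg_match($pattern, '') === false) {
    exit('Invalid search expression');
}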

Resource check


When the data refers to a resource, we must check that the resource actually exists. This is almost always accompanied by additional checks: automatically creating missing resources, refusing to open the wrong ones, and rejecting file system paths crafted to perform a directory traversal attack.
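A sketch of a path check that blocks directory traversal (the base directory is an illustrative assumption):

$base = realpath('/var/www/uploads');
$requested = isset($_GET['file']) ? $_GET['file'] : '';
$path = realpath($base . '/' . $requested);

// realpath() resolves '..' and symlinks; the result must stay inside $base.
if ($path === false || strpos($path, $base . DIRECTORY_SEPARATOR) !== 0) {
    exit('No such file');
}
readfile($path);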

Validation of input sources


Despite our best efforts, input validation does not solve every security problem. Very often it is impossible to reliably validate what users enter. And the odds of that rise when the application works with data sources it considers trusted, such as a local database. For a database there are not many extra checks we can apply. But let's look at the example of a remote web service protected with SSL or TLS, say an API we query over HTTPS.

HTTPS is the main defense against man-in-the-middle (MITM) attacks, in which an attacker inserts himself between two endpoints of a data exchange. The intermediary impersonates the server: clients think they are connecting to the server, when in fact they are talking to the attacker, who opens his own connection to the requested server. The attacker relays data in both directions and can read it, and neither the clients nor the server are any the wiser. Worse, the intermediary can modify the data in transit.

To prevent such attacks, attackers must be denied the ability to impersonate the server and to read the messages exchanged between server and clients. That is what SSL/TLS is for, and it performs two basic security functions:

  1. It encrypts all transmitted data with a shared key available only to the client and the server.
  2. It requires the server to prove its identity with a public certificate and a private key, issued by a trusted authority and recognized by the client.

Keep in mind that SSL/TLS encryption can happen between any two parties. In a man-in-the-middle attack the client contacts the attacker's "server" and negotiates mutually encrypted traffic with it. Encryption alone is therefore useless, because we never asked the "server" to prove it is who it claims to be. That is why the second, formally optional, stage of SSL/TLS is required: a web application MUST verify the identity of the server it talks to in order to be protected against man-in-the-middle attacks.

It is widely believed that encryption alone protects against such attacks, and many applications and libraries skip the second stage. This is a common and easily found vulnerability in open-source applications. For reasons that defy understanding, PHP itself used to disable server verification by default in its HTTPS stream wrapper when using stream_socket_client(), fsockopen() and other built-in functions. For example:

$body = file_get_contents('https://api.example.com/search?q=sphinx'); // peer verification is off by default in PHP < 5.6

This is obviously vulnerable to a man-in-the-middle attack. Data received from such an HTTPS request must never be treated as coming from the intended service. Done properly, the request should verify the server:

$context = stream_context_create(array(
    'ssl' => array('verify_peer' => TRUE),
));
$body = file_get_contents('https://api.example.com/search?q=sphinx', false, $context);

UPD. In PHP 5.6+, the verify_peer SSL context option is set to TRUE by default.

The cURL extension has server verification enabled out of the box, so nothing needs to be configured. However, programmers sometimes treat the security of their libraries and applications thoughtlessly and disable it, and this can turn up in any of the libraries your application depends on.

curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false); // never do this ($ch is the cURL handle)

Disabling server verification in an SSL context, or via curl_setopt(), opens you up to man-in-the-middle attacks. And it usually gets disabled just to silence annoying errors: errors that may indicate an attack, or that the application is trying to contact a host whose SSL certificate is misconfigured or expired.
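For comparison, here is a sketch of the same request made with cURL and verification left on (the URL repeats the earlier illustrative example):

$ch = curl_init('https://api.example.com/search?q=sphinx');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, true); // the default; never turn it off
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 2);    // also check the certificate's host name
$body = curl_exec($ch);
curl_close($ch);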

Web applications often act as proxies for user actions, for example as a Twitter client. The least we can do is hold our applications to the high standards set by browsers, which warn users and try to keep them from connecting to suspicious servers.

Conclusions


We often have everything we need to build a secure application. But we ourselves bypass reasonable restrictions to make development and debugging easier, or to silence annoying error output. Or, with the best of intentions, we needlessly overcomplicate the application's logic.

But hackers, too, earn their bread for a reason, and they will not pass up such gifts.

Source: https://habr.com/ru/post/310726/

