
Given enough money, all bugs are shallow.

In his essay "The Cathedral and the Bazaar," Eric Raymond coined the famous phrase:

"With a sufficient number of eyes, all the errors float to the surface."

The implication is that open source software, by its very nature, should contain fewer bugs than closed source software, because the code is available for anyone to examine. Raymond dubbed this observation "Linus's Law."
In a sense, of course, it is true. If the source code can be seen only by the ten full-time programmers at your company, the results are unlikely to match those of code put up for public review on, say, GitHub.

However, the turning point for Linus's Law came with the discovery of the Heartbleed vulnerability in OpenSSL, a catastrophic exploit caused by a serious bug in open source software. What was the scale of the disaster? Roughly 18% of all HTTPS-enabled sites in the world turned out to be vulnerable. As a result, attackers could view all traffic to those sites in unencrypted form... for two years.
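To make the failure concrete, below is a minimal sketch in C of the class of bug behind Heartbleed. It is deliberately simplified and is not the actual OpenSSL code: the handler echoes a heartbeat payload back to the peer, trusting the length field supplied in the message and never checking it against the number of bytes actually received.

    /* Simplified illustration of a Heartbleed-style buffer over-read.
     * NOT the real OpenSSL code; names and structure are invented. */
    #include <stdlib.h>
    #include <string.h>

    unsigned char *handle_heartbeat(const unsigned char *msg, size_t msg_len)
    {
        /* The first two bytes of the message claim the payload length. */
        size_t payload_len = ((size_t)msg[0] << 8) | msg[1];
        const unsigned char *payload = msg + 2;

        /* Missing check: payload_len is never compared with msg_len - 2,
         * so the memcpy below can read far past the real message into
         * adjacent process memory (keys, passwords, session data). */
        unsigned char *response = malloc(2 + payload_len);
        if (response == NULL)
            return NULL;
        response[0] = (unsigned char)(payload_len >> 8);
        response[1] = (unsigned char)(payload_len & 0xff);
        memcpy(response + 2, payload, payload_len); /* over-read happens here */
        return response;
    }

The actual fix was essentially a bounds check rejecting messages whose claimed payload length exceeds what was actually received, exactly the kind of one-line detail that casual eyes skimming the surface of the code are unlikely to catch.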

You thought those sites were secure? Not so much. This bug went unnoticed for two years.

Two years!

The OpenSSL library, where this bug appeared, is one of the most critical pieces of Internet infrastructure in the world. Large companies rely on it to encrypt their customers' personal data in transit. OpenSSL is used on millions of servers and devices to protect sensitive information that must be encrypted and hidden from view: passwords, bank account details, credit card numbers.

This code should be among the most trusted in the world. So where were all those eyes?

"In fact, fixing real bugs in anything but trivial open source software is not just difficult, it's extraordinarily so. I've rarely had to do it myself, even though I'm an experienced developer. What happens in most cases? You simply report the problem to the author of the code and hope they fix it." - Neil Gunton

"Even if an intrepid hacker does read the code, they are unlikely to spot a hard-to-detect bug. Why? Because very few open source hackers are security experts." - Jeremy Zawodny

"The fact that many eyes are looking at a program will not make it more secure, but it will make many people believe it is secure. The result is an open source community that appears far too trusting when it comes to security." - John Viega

I think Linus's Law has a few flaws.

1. A user's eyes and a developer's eyes are two very different things. Installing a binary RPM, compiling something on Linux, or even finding bugs and reporting them through the bug tracker does not mean you are helping to critically review the code. Most eyes look only at the surface of the code. And although you, as a user, may stumble upon a security problem, even a serious one, the most harmful bugs require knowledge of how the code works on the inside.

2. It is far easier to write (or cut and paste) your own code than to understand and evaluate someone else's code at the level of an independent expert. This creates a fundamental, unavoidable asymmetry: the number of lines of code being churned out today, even assuming only a small fraction of them deserve serious review, vastly exceeds the number of eyes available to look at them. (Yes, this is yet another argument for writing less code.)

3. There simply aren't enough qualified eyes to review the code. Yes, the total number of programmers keeps growing, but how many of them are skilled enough, and sufficiently versed in security, to effectively review someone else's code? A vanishingly small number.

Even when code is 100% open source, performs a mission-critical task, and is used by major companies on virtually every public-facing web server to keep customers secure, it can still end up with critical bugs that affect everyone. For two years!

It's time to draw some conclusions. If, as it turns out, we can't get enough eyes on OpenSSL, what chance does any other code have? What do we do? Where do we get more eyes?

In the short term:

• Create more OpenSSL alternatives to diversify the ecosystem.

• Improve support and funding for OpenSSL.

Both of these measures are worthwhile and necessary. Both should be applied to every critical part of the open source ecosystem that people depend on.

But how do we solve the general shortage of eyes on open source in the long term? The answer will sound familiar, though I suspect Eric Raymond won't be thrilled with it.

Money. Lots and lots of money.

More and more companies today are turning to commercial bug bounty programs. These programs are run either by the companies themselves or by third-party platforms such as Bugcrowd, Synack, HackerOne, and Crowdcurity. The company pays for each bug discovered, and the bigger and scarier the bug, the larger the payout.

An alternative is to take part in an event such as Pwn2Own, an annual competition that offers prizes of several hundred thousand dollars for exploiting popular software. Annual events of this scale are always widely covered in the press and draw interest from the biggest players in the market.

The core idea is this: if you want bugs found in your code, your website, or your application, do it the old-fashioned way and pay for the review. In other words, buy yourself some eyes.

While I welcome any effort to improve security, and I fully agree this is a battle to be fought on several fronts, both commercial and non-commercial, I am troubled by some aspects of paying for bug hunting becoming the norm. What might the consequences be?

Money drives security bugs underground


Exploits now have a price, and the deeper and less widely known the exploit, the greater the temptation to keep quiet about it until it fetches a bigger sum. So it can make sense to sit on a security problem for the better part of a year before reporting it, and in the meantime the problem isn't going anywhere; who knows who else might find it?

If it all comes down to money, who pays more: the good guys or the bad guys? Which is the better strategy: to wait and earn more, or to refine the exploit until it becomes even more dangerous? For all our sakes, I hope the good guys have the deeper pockets; otherwise we are all in trouble.

I like that Google has already started tackling this problem by changing the terms of Pwnium, its own version of Pwn2Own for Chrome, so that (a) rewards can be claimed at any time and (b) the prize pool is unlimited. I don't know whether that's enough, but it is definitely a step in the right direction.

Money turns security into a transaction


I first noticed this trend when a couple of people reported minor security problems in Discourse and then seemed to be waiting, expectantly, for some kind of reward (as far as one can tell from email). It felt strange and made me uncomfortable.

So now I'm expected not only to give the world completely free open source software, but also to pay the people who report security problems in it and thereby improve its quality? Believe me, I genuinely appreciate security reports, so I sent those people everything I could: stickers, t-shirts, lengthy thank-you letters, credits in the code and in the commit notes. But open source isn't about the money... is it?

Perhaps the picture is different with closed source: those are commercial products, the "favor for a favor" principle doesn't apply there, and people pay for the product one way or another, directly or indirectly.

No money, no security


If all the best security researchers end up hunting bugs for ever-larger rewards, and all the big companies switch to this model, what will that do to the software industry?

It will mean that without a large budget you can't expect decent security, because nobody will bother reporting bugs to you. Why would they? They won't get paid. They'll go hunt for problems in someone else's software instead.

The extortionate stance of "pay me or I won't tell you about your terrible bug" no longer seems far-fetched. We already get emails like that.

Easy money attracts everyone


An unpleasant side effect of paying for bugs is that it attracts not only conscientious researchers but anyone who is after easy money.

We have received far too many "critical" security reports whose real value turned out to be close to zero. But we still have to investigate every one, because they are the "most important" reports, right? Unfortunately, many of them are a waste of time, because...

• the reporter is far more interested in scaring you with the dire consequences of a critical security breach than in clearly explaining the bug, so in the end you have to do all the work yourself;

• the reporter doesn't understand what an exploit actually is, but knows that anything resembling one has a price, and therefore reports everything they find;

• the reporter can't share their findings with other researchers to confirm that the exploit really works, because the discovery might be "stolen" and cashed in by someone else;

• the reporter needs to convince you it's an exploit in order to get paid, so they will argue their case. At great length and with great persistence.

These incentives feel completely wrong to me. Of course I know security is extremely important, but I watch this approach with growing dismay, because it adds to my workload while producing almost nothing in return.

What can be done?


Fortunately, we share a common goal: making software more secure.

So let's treat paid bug hunting as one more weapon, another layer of defense in depth, perhaps best suited to commercial projects with an adequate budget. In that context it's fine.

But I would offer some advice to anyone running a commercial bug bounty program:

• Have someone carefully triage incoming bug reports first: are they credible, do they include clearly described reproduction steps, are they actually exploitable?

• Build incentives into your community for finding vulnerabilities quickly and effectively. Researchers should be able to work together without hiding anything from one another.

• Develop a reputation system so that only the best, proven participants make it through all the stages and submit reports.

• Convince the big market players to fund bug bounty programs for the open source projects everyone relies on, not just for their own closed-source applications and sites. At Stack Exchange we gave financial support every year to the open source projects we used. No-strings funding for bug hunting can increase the number of eyes on the code.

I'm afraid we will soon live in a world where, given enough money, all bugs are shallow. Money does create some perverse incentives around software security, and those incentives need to be kept in check.

But I still believe people will keep reporting security problems in open source software on their own, because

• it's the right thing to do™

and

• they want to help the projects that once helped them,

... and then we will get to live in a sane world a little while longer. Or so I'd like to hope.

Translated by ABBYY Language Services

Source: https://habr.com/ru/post/258197/

