
For months, Google managed to stay out of the growing backlash against the tech industry, but on October 8 the dam finally broke, under news of a bug on the rarely used Google+ network that could have exposed the personal information of half a million users. Google found and fixed the vulnerability back in March, at roughly the same time the Cambridge Analytica story was gathering steam. But now that the news is out, the damage keeps mounting. The consumer version of Google+ is being shut down, privacy-minded lawmakers in Germany and the United States are already looking for grounds to sue, and former officials of the US Securities and Exchange Commission are openly speculating about what Google did wrong.
Taken on its own, the vulnerability seems relatively minor. The problem was a developer API that could be used to access non-public profile information. Crucially, there is no evidence that anyone ever used it to get at private data, and given the network's moribund user base, it's not clear how much private data there even was to see. In theory anyone could have requested access to the API, but only 432 people ever did (again, this is Google+), so it's plausible that none of them even thought to try.
The much bigger problem for Google wasn't the crime but the cover-up. The vulnerability was fixed in March, yet the company sat on the news for another seven months, until internal discussion of the bug found its way to The Wall Street Journal. Google clearly understands that something went wrong - why else would it be wiping the social network off the face of the earth? - but exactly what went wrong, and when, remains muddled, and that confusion points to deeper problems with how the tech world handles privacy failures like this one.
Part of the frustration comes from the fact that, legally speaking, Google is in the clear. There are plenty of laws about reporting data breaches - chiefly the GDPR, along with assorted national laws - but by their standards, what happened to Google+ wasn't, strictly speaking, a breach. Those laws deal with unauthorized access to user information, and they encode a simple idea: if someone steals your credit card number or your phone, you have a right to know about it. But Google only found that this data could have been accessed by developers, not that any of it had actually leaked. With no clear evidence of theft, the company had no legal duty to report anything. As far as the lawyers were concerned, there was no breach, and quietly fixing the problem was enough.
There are real arguments against disclosing bugs like this one, although in hindsight they don't look terribly convincing. Every system has vulnerabilities, so the only sound security strategy is to keep finding and fixing them. By that logic, the safest software is the software with the most bugs found and patched, however counterintuitive that may seem to an outsider. Forcing companies to report every bug they fix would get the incentives backwards, punishing exactly the products that do the most to protect their users.
Of course, Google itself has spent years abruptly exposing other companies' bugs through its Project Zero initiative, which is partly why critics have been so quick to call out the company's apparent hypocrisy. But as the Project Zero team will tell you, third-party reporting is a different story entirely: that kind of disclosure is generally meant to pressure vendors into fixing bugs and to build the reputations of the white-hat hackers who hunt them.
That logic fits software bugs better than it fits social networks and personal data, but it is common currency in the cybersecurity world, and it's no exaggeration to say it shaped Google's thinking when the company decided to sweep the story under the rug.
But after Facebook's own unpleasant debacle, arguments from law and cybersecurity feel almost beside the point. The compact between tech companies and their users is as fragile as it has ever been, and stories like this strain it further. The problem isn't the data leak; it's the trust leak. Something went wrong, and no one at Google said so. If not for the WSJ's report, we might never have known at all. It's hard to avoid the uncomfortable rhetorical question: what else aren't they telling us?
It's too early to say whether Google will face real blowback over this incident. The small number of affected users and Google+'s relative irrelevance suggest it won't. But even if this particular vulnerability wasn't critical, problems like it pose a genuine threat to users and to the companies they trust. The confusion over what to even call it - a bug, a leak, a breach - compounds a deeper uncertainty about what companies owe their users when a privacy flaw is serious, and how much control we really have over our own data. These questions have become critically important in our technological era, and if the past few days have taught us anything, it's that the industry is still searching for answers.