
In January 2015, David Broockman, then a PhD candidate at the University of California, Berkeley, discovered something strange in a study that he and fellow graduate student Joshua Kalla were trying to replicate. According to the experiment, run by UCLA graduate student Michael LaCour and published in Science the previous December, gay canvassers who went door to door during an election campaign and held short conversations with California residents about marriage equality, revealing their own sexual orientation along the way, durably won voters over on the issue.
Broockman and Kalla found that the study had been conducted with certain "irregularities." First, the survey response rates were far higher than such experiments normally achieve. Moreover, Broockman and Kalla could not reproduce even part of the results LaCour had described. When they contacted the survey firm LaCour said he had worked with, they learned that no such survey had ever been run. It appeared the experiment had never taken place at all.
In May, Broockman, Kalla, and fellow researcher Peter Aronow laid out their suspicions about the study in a 27-page report. Nine days later, Science published a retraction, one that rattled the scientific community and made blush those public figures who like to trumpet new discoveries without troubling themselves with critical scrutiny.
Radio host Ira Glass was no exception. He had cited the study in an episode of the popular program "This American Life." In a post that appeared soon afterward on the show's blog, Glass explained: "We did the story with what seemed like solid scientific evidence ... which confirmed that the canvassers really did have an effect ... Our story was based on the information available at the time. Obviously, the facts have changed."
Or rather, those facts were never facts to begin with.
Glass's statement points once again to several common weaknesses in journalism: first, the tendency to rely on the results of a single study and keep the spotlight on them; second, a lack of skepticism in the face of so-called "solid scientific evidence." And this is not unique to journalism. Even some social scientists, when asked why they had not doubted such implausible findings, appealed to the authority of Donald Green, the respected Columbia University professor who co-authored the study. (Green himself was surely just as unpleasantly surprised, and may even have hastened the appearance of the retraction in Science.)
It may be that many public figures in the media and in science simply wanted to believe the results. Broockman and Kalla set out to replicate LaCour's study, not to debunk it. They admired his approach, and they recall that their first reaction was not to suspect LaCour of lying but to accept the published findings in full, despite all the oddities.
Moreover, the LaCour case is only one item on a long list of unpleasant revelations that cast doubt on the reliability of published scientific work. Consider the endless stream of contradictory news about the health effects of various foods: soy, coffee, olive oil, chocolate, red wine. According to one study they all have "wonderful properties," while other experts insist they are extremely harmful. The most serious blow to science, though, was probably dealt by a recent large-scale effort to replicate 100 published psychology studies, fewer than half of which could actually be reproduced. In fact, only 39% of them could. And no one is saying that the results of the other 61% (or even any of those experiments) were falsified. As the report produced by the project (also published in Science) put it, "Scientific claims should gain credence not because of the status or influence of their author but through the replicability of their supporting evidence. Even research of exceptionally high quality can yield irreproducible empirical findings because of random or systematic error."
In other words, published does not mean reliable.
Science and journalism can seem fundamentally mismatched: where journalists prefer immaculate storylines, science advances in fits and starts, through false starts and detours off the beaten track, along a long, thorny path toward the truth or at least toward scientific consensus. "The stories reporters choose to tell can also play a cruel trick," says Peter Aldhous, who teaches investigative journalism in the Science Communication Program at the University of California, Santa Cruz, lectures at the Berkeley Graduate School of Journalism, and works as a science reporter at BuzzFeed. He explains: "As journalists, we are usually drawn to counterintuitive, surprising, man-bites-dog stories. In science, that approach is unlikely to lead to reliable information." Simply put, if a conclusion seems improbable or anomalous, it is most likely wrong.
On the other hand, even when worthy discoveries are published, they do not always cause the flurry of enthusiasm their authors might hope for. "To the average person, a lot of basic research is boring," smiles Robert MacCoun, a former professor of public policy at Berkeley, now at Stanford Law School. "I think if I tried to explain at a party why it is actually interesting, I'd be met with blank stares." So journalists and their editors may embellish a study's results a little, tack an explanatory caveat onto the end, and find a perfect screaming headline. Perhaps none of it is technically misleading, but in the end readers draw false conclusions.
According to Aldhous, this is exactly what science journalists should avoid. "I am not arguing for boring headlines, and I am not asking you to pile endless caveats into the fine print. But perhaps, in choosing the next story, it is worth applying more prudence and skepticism."
Journalism runs on constant pressure, and time is the cause of many of the field's shortcomings. The news cycle grinds on as usual, and journalists resort to all sorts of tricks to stay on top of it. Sometimes, in the race to meet a deadline, accuracy fades into the background. So why not slow down and make sure we are dealing with verified facts?
Aldhous says that a critical analysis of the scientific data behind a particular discovery can bog you down for a long time. "If you dig into everything and check every detail of the story, you most likely won't hold on to a job at a major news organization." Aldhous himself once exposed a researcher who had doctored stem-cell images in a University of Minnesota laboratory. Two papers were ultimately retracted, but it took years of hard work. "It takes a lot of time if you seriously intend to get at the nuances," he says.
Drawing on that experience, he tries to pin down the main cause of science journalism's failures. "The news has to deliver exciting stories. The problem is understanding how the world of science works: realizing that the exciting new finding may, or even most likely will, eventually turn out to be empty."
Beyond the difficulty of translating scientific ideas, there are also the distortions that PR introduces into science journalism. The phenomenon is not new. In a 1993 paper, Chicago Tribune writer John Crewdson put the question bluntly: aren't journalists who trust every press release from scientific organizations something like cheerleaders, compared with reporters who rely only on verified facts? His concern was well founded. In 2014, a study in the British Medical Journal found that the less reliable a university's press releases were, the more inaccuracies appeared in the accompanying news coverage.
Aldhous puts it this way: "Scientists are supposed to be the arch-skeptics, and journalists who report on science must be doubters too. We are obliged to question everything." Aldhous believes that, in working up a story, science journalists should focus less on "what's new and interesting in Nature's press releases" and pay more attention to the substance of the material.
Nor should the sheer volume of press releases be allowed to dictate what the culture pays attention to. Aldhous is confident that if that changes, "it will be possible to move smoothly toward presenting content that is both more entertaining and genuinely socially significant."
That transformation of journalism is, at best, halfway there. The fabricated results of LaCour's study did not do wide damage in the media, and in the end they harmed mostly their author. Yet even though outright counterfeit scientific papers are fairly rare, it does not follow that science rewards solid, careful work.

MacCoun, the public policy professor, observes that the demands placed on young researchers today are tougher than at any point in history. "We have so many excellent people finishing graduate school with the number of publications that used to be enough to make associate professor," he says. "It's just an arms race." According to MacCoun, the real concern is not so much falsified data as "the fact that even conscientious scientists unwittingly feed the trend, running study after study without end and publishing only the results of the successful experiments."
Michael Eisen, a biologist at UC Berkeley and an ardent critic of the structure of scientific publishing, believes that the number of publications is not the only problem. One reason LaCour's findings gained such wide resonance is that they appeared in Science, a prominent, influential journal that places extremely high demands on those who want to publish in it. Eisen argues that to many university hiring committees such articles look more than convincing, and no one is in a hurry to question their authenticity.
Eisen explains that in the past, journals were simply a way of organizing the meetings of small groups of amateur scientists. But once scientific work came to be seen as a full-fledged profession, journals took on the job of vetting research results and, with very limited resources at their disposal, published only the most interesting findings. Eisen is confident that this selection process, colored by subjectivity and by a desire to tell the public about particular scientific trends, tends to neglect the integrity of experimental methods in favor of flashy results. "Journals were created to serve science, but now everything is the other way around: they have become a driving force of scientific knowledge rather than a tool for disseminating it."
Like most mass media covering research, scientific journals never stop hunting for big, incredible discoveries, and the unglamorous, slow accumulation of scientific knowledge holds little appeal for them.
This, Eisen points out, is "how we learned about barnacles." After returning from his voyage around the world aboard the Beagle, Charles Darwin devoted eight years of painstaking work to identifying, classifying, describing, and sketching this group of organisms. The observations he recorded and the drawings he made are used by marine biologists to this day. But who would do that now? According to Eisen, no one wants to take on such a task: it demands too much time and effort, and the effort would hardly be appreciated. And although Darwin's approach is the foundation of true scientific knowledge, the journals' hunger for sensation compels researchers to rush through ever more new experiments.
Eisen adds: "You hope that of the hundred tests you run, at least one will turn out to be truly worthwhile. But if you neglect the details, the results don't pay off. And when it is obvious in advance that the conclusions will prove false or uninteresting, yet you keep working carelessly, the outcome is anything but trivial. If you had done everything carefully, things would look different." (And remember, it is the flashy studies that make the news bulletins!)
For example, in 2010, scientists announced they had found a bacterium that could build its DNA with arsenic; the results of the experiment promptly appeared in Science. "The report looked tempting," Eisen recalls. "But everything about it was wrong. Within an hour of publication, people had found a raft of errors in the work. In the end, scientists tested the claim and it turned out to be untenable." None of this would have happened, Eisen insists, if the editors had looked critically at the study's conclusions instead of turning a blind eye to the extraordinary assumptions behind them.
MacCoun supports the broader point. "Unwittingly, we have been rewarding such paradoxes," he says, "and we must create conditions under which quality control comes to the fore."
Publication for publication's sake is an approach that carries a number of side effects by default. Research that claims no new phenomena, the experiments with null results, is just as important to science as novel findings. Yet Stanford political scientist Neil Malhotra found that studies with strong positive results were 60 percent more likely to make it into print than null-result experiments. Hence MacCoun's conclusion: "we are gradually coming to realize ... that scientists share only the significant data, and that the error rate is much higher than the figures appearing in official reports."
In an article for the statistics site FiveThirtyEight, science writer Christie Aschwanden introduced readers to the widespread practice of p-hacking, in which researchers massage social science data of every kind until they obtain "statistically significant," though often spurious, results. Statistical significance, signaled by a low p-value (the probability of obtaining results at least as extreme as yours when there is in fact no real effect), is the golden ticket that guarantees publication. By convention, a p-value of no more than 0.05 suffices. But what if it comes out to 0.06? Let it go, or nudge the variables a little to bring the number under the threshold?
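To make the mechanism concrete, here is a minimal simulation in Python (the parameters are illustrative choices of my own, not taken from Aschwanden's article). Every simulated "study" compares two groups drawn from the same distribution, so any significant result is a false positive; a researcher who quietly tests ten outcome variables and reports only the best p-value crosses the 0.05 threshold far more often than the nominal 5% of the time:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_experiments = 2_000  # simulated studies in which the null hypothesis is true
n_per_group = 30       # participants per group
n_outcomes = 10        # outcome variables the researcher could choose to test

false_positives = 0
for _ in range(n_experiments):
    # Both groups come from the SAME distribution: any "effect" is pure noise.
    control = rng.normal(size=(n_outcomes, n_per_group))
    treatment = rng.normal(size=(n_outcomes, n_per_group))
    # The p-hacker tests every outcome and keeps only the smallest p-value.
    p_values = [stats.ttest_ind(c, t).pvalue for c, t in zip(control, treatment)]
    if min(p_values) < 0.05:
        false_positives += 1

print(f"Studies reporting a 'significant' effect: {false_positives / n_experiments:.0%}")
# Expected: roughly 1 - 0.95**10, i.e. about 40%, instead of the nominal 5%.
```

The arithmetic behind the output is the point: each extra outcome tested is another lottery ticket, which is why "just one more variable" so reliably turns 0.06 into 0.049.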
And the point is not that social scientists are cheating; the practice simply shows how hard it is to separate true signal from noise. Tellingly, one of Aschwanden's articles ran under the headline "Science Isn't Broken: It's just a hell of a lot harder than we give it credit for." Add to this human nature and our tendency to interpret data in favor of what we already believe, and you begin to understand the source of the shortcomings inherent in scientific research.
"If I'm asked to explain why so many untenable papers make it into print, I'd point to this particular trait [the propensity to confirm one's own point of view]," Eisen admits.

This does not mean that scientists cheat; they are simply human, like the rest of us. "I think it all happens at the subconscious level," MacCoun suggests. "It seems to me that behind the kindest intentions lies the fact that people simply do not understand what is actually going on."
Since confirmation bias is a well-documented and common psychological phenomenon, Eisen argues, the very structure of science should be designed to suppress the slightest such impulse. "When a scientist has an idea that seems true to him, the first and foremost task is to try to prove it wrong ... We have to teach people to think that way. When running a laboratory and conducting our own experiments, it is important to remember that we are predisposed to believe whatever seems right to us. And our methods for evaluating scientific claims should likewise be built on an understanding of those tendencies."
One solution to the problem was proposed in a recent Nature article by MacCoun and Saul Perlmutter, the Nobel Prize-winning Berkeley physicist. They suggested setting personal preferences aside through "blind analysis": hiding the true result from the analysts, for example by adding a secret offset to the data, so that every decision about how to process and analyze it is made before anyone knows whether the outcome supports the hypothesis.
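As an illustration only, here is a minimal sketch of the hidden-offset flavor of blinding; the setting and numbers are invented for this example, not drawn from the Nature article:

```python
import numpy as np

rng = np.random.default_rng(42)

# Raw measurements (say, repeated estimates of some effect size).
measurements = rng.normal(loc=0.8, scale=0.3, size=200)

# --- Blinding step: a colleague (or a script) adds a secret offset. ---
secret_offset = rng.uniform(-5.0, 5.0)   # kept hidden from the analyst
blinded = measurements + secret_offset

# --- Analysis phase: every choice is made on blinded data only. ---
# The analyst decides on outlier cuts and so on without seeing the true
# value, so the choices cannot be tuned, even unconsciously, toward a
# desired answer.
keep = np.abs(blinded - np.median(blinded)) < 3 * blinded.std()
blinded_estimate = blinded[keep].mean()

# --- Unblinding: performed once, after the analysis is frozen. ---
final_estimate = blinded_estimate - secret_offset
print(f"Final estimate after unblinding: {final_estimate:.3f}")
```

Because the offset is constant, relative judgments (which points are outliers, which model fits best) are unaffected, yet the analyst cannot steer the final number toward a preferred answer until the unblinding step.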