
Sometimes it seems surprising that science works at all. In 2005 the medical community was shaken by a paper with the provocative title “Why Most Published Research Findings Are False” [Ioannidis, JPA. Why most published research findings are false. PLoS Medicine 2, e124 (2005)]. It was written by John Ioannidis, a professor of medicine at Stanford University. He did not expose any particular result as wrong. Rather, he showed that the statistics of reported discoveries did not square with how often such discoveries could realistically be expected. As Ioannidis later put it, “many published research findings are false or exaggerated, and an estimated 85 percent of research resources are wasted” [Ioannidis, JPA. How to make more published research true. PLoS Medicine 11, e1001747 (2014)].
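To see the arithmetic behind Ioannidis's argument, it helps to compute the positive predictive value of a “discovery”: the chance that a statistically significant finding reflects a real effect. Below is a minimal sketch in Python, following the logic of the 2005 paper; the prior, power, and significance threshold are illustrative assumptions, not values taken from the paper.

```python
# Positive predictive value (PPV) of a statistically significant finding,
# in the spirit of Ioannidis (2005). All three inputs are assumed values.

alpha = 0.05   # significance threshold (false-positive rate per test)
power = 0.80   # chance of detecting a true effect (1 - beta)
prior = 0.10   # assumed fraction of tested hypotheses that are actually true

true_positives = prior * power           # real effects correctly detected
false_positives = (1 - prior) * alpha    # null effects that look significant

ppv = true_positives / (true_positives + false_positives)
print(f"Share of 'discoveries' that are real: {ppv:.0%}")  # ~64% here
```

Even with decent power and no bias at all, roughly a third of the “discoveries” in this toy setting are false; any bias in analysis or reporting only worsens the ratio.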
It is likely that some researchers deliberately massage their data to get their studies published. And some problems are tied specifically to journal policies. But the trouble with false discoveries often begins with researchers deceiving themselves unconsciously: they fall victim to cognitive biases, habits of thought that lead us to conclusions that are wrong but convenient or appealing. “Given the reproducibility rate of work in psychology and other empirical sciences, we can say with confidence that something is not working as it should,” says Susann Fiedler, a behavioral economist at the Max Planck Institute for Research on Collective Goods in Bonn. “Cognitive biases may be one of the reasons.”
Psychologist Brian Nosek of the University of Virginia says that the most common and most problematic bias in science is motivated reasoning: interpreting results so that they fit the idea you already hold. Psychologists have shown that “most of our reasoning is in fact rationalization,” he says. In other words, we have already decided what to do or what to think, and our “explanation” of the decision is simply a justification for what we wanted to do, or wanted to believe, all along. Science, of course, is supposed to be more objective and skeptical than everyday reasoning, but how well does it actually manage that?
Falsifiability, the criterion of a scientific theory formulated by the philosopher Karl Popper, holds that a scientist should look for ways to test and refute his own theories, that is, to answer the question “How am I wrong?” Nosek says that scientists usually ask instead “How am I right?” (or, what amounts to the same thing, “How are you wrong?”). When facts arise suggesting we might be mistaken, we tend to dismiss them as irrelevant. The notorious “cold fusion” episode of the late 1980s, involving the electrochemists Martin Fleischmann and Stanley Pons, was full of this kind of selective use of facts. For example, when Fleischmann and Pons were told that the gamma-ray spectrum of the reaction they claimed to have produced had its peak at the wrong energy, they simply moved the peak, muttering something about a calibration error.
Statistics might seem to offer an escape from bias through the safety of numbers, but it, too, succumbs. Chris Hartgerink of Tilburg University in the Netherlands works on the influence of the human factor in the collection of statistics. He points out that researchers often attribute false certainty to noisy statistical results. “Researchers, like all people, are bad at intuiting probabilities,” he says. Although some non-significant results must surely be false negatives, cases in which a real effect was wrongly dismissed, Hartgerink says he has never come across a paper that described its findings that way. His recent research suggests that two out of three psychology papers reporting non-significant results may be overlooking false negatives [Hartgerink, CHJ, van Assen, MALM, & Wicherts, JM. Too good to be false: Non-significant results revisited].
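A toy simulation makes Hartgerink's point concrete: when statistical power is modest, a large share of non-significant results are false negatives rather than evidence of absence. This is an illustrative sketch, not his analysis; the effect size and sample size are assumptions chosen for the example.

```python
# Simulated studies of a real effect: every non-significant result here
# is by construction a false negative. Illustrative numbers only.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies, n, effect = 10_000, 20, 0.5  # 10k studies, 20 subjects, true d = 0.5

false_negatives = 0
for _ in range(n_studies):
    sample = rng.normal(loc=effect, scale=1.0, size=n)
    t, p = stats.ttest_1samp(sample, popmean=0.0)
    if p >= 0.05:
        false_negatives += 1

# With these assumptions, roughly 4 in 10 studies miss the real effect.
print(f"Non-significant (false-negative) studies: {false_negatives / n_studies:.0%}")
```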
Given that science has uncovered a vast number of cognitive biases, the neglect of their consequences within science itself looks rather strange. “I knew in general terms that people are subject to biases,” says Hartgerink, “but when I first realized that they apply to scientists too, I was surprised, even though it is so obvious.”
The usual answer to such concerns is that even if individual scientists can deceive themselves, others will not hesitate to criticize their ideas or their results, so no great harm is done: science, being a communal activity, is self-correcting. Sometimes it is. But this does not necessarily happen as quickly or as smoothly as we would like to believe.
Nosek believes that peer review can sometimes actively interfere with the quick and accurate checking of scientific claims. He points out that when a team of Italian physicists announced in 2011 apparent evidence of neutrinos traveling faster than light (in violation of Einstein's special theory of relativity), the startling claim was made [Antonello, M., et al. Measurement of the neutrino velocity with the ICARUS detector at the CNGS beam. arXiv preprint arXiv:1203.3433 (2012)], checked, and refuted [Brumfiel, G. Neutrinos not faster than light. Nature News (2012). doi:10.1038/nature.2012.10249; Cho, A. Once again, physicists debunk faster-than-light neutrinos. news.sciencemag.org (2012)] so quickly thanks to high-energy physics' efficient system for circulating preprints through an open-access repository. Had those checks relied on conventional peer review, they could have taken years.
Similarly, when researchers claimed in a 2010 article in the journal Science that a microbe could incorporate arsenic into its DNA in place of phosphorus, a claim that would rewrite fundamental chemical principles of life, one of the researchers who later tried to reproduce the discovery considered it important to document the follow-up work in an open-access blog. This ran against the practice of the original team, which was criticized for failing to provide evidence supporting its controversial claim [Hayden, EC. Open research casts doubt on arsenic life. Nature News (2011). doi:10.1038/news.2011.469].
Peer review seems to let errors through, especially in fields such as medicine and psychology, more often than one might think, as the growing “reproducibility crisis” attests. Medical reporter Ivan Oransky and scientific editor Adam Marcus, who run the Retraction Watch service, put it this way: “When science works as intended, later findings complement, alter, or completely undermine earlier research. The problem is that in science, or, more precisely, in scientific publishing, this process rarely works as intended. Most articles published in scientific journals would stand little chance of being confirmed if someone decided to repeat the experiment in another laboratory.”
One reason for the distortion of the scientific literature is that journals are far more willing to publish positive results than negative ones: it is more rewarding to confirm something than to refute it. Journal reviewers tend to dismiss negative results as boring, and researchers who present such findings gain little reputation or standing with funders or the heads of their institutions. “If you run 20 experiments, one of them is likely to produce a publishable result,” Oransky and Marcus write. “But publishing a result does not confirm it. In fact, quite the opposite.” [Oransky, I. Unlike a Rolling Stone: Is science really better than journalism at self-correction? www.iflscience.com (2015)].
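The “one in 20” remark is just the arithmetic of significance testing: with a threshold of 0.05 and no real effects anywhere, the chance of at least one spurious positive grows quickly with the number of experiments. A small sketch:

```python
# Probability of at least one false-positive "finding" when running many
# experiments on pure noise at a 0.05 significance threshold.

alpha = 0.05
n_experiments = 20

p_at_least_one = 1 - (1 - alpha) ** n_experiments
print(f"P(at least 1 'significant' result in {n_experiments} tries): {p_at_least_one:.0%}")
# Prints ~64%: a publishable-looking result is more likely than not.
```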
Oransky believes that although, in principle, the whole scientific reward system reinforces confirmation bias, the pressure to publish is among the most problematic incentives. “To get positions, grants, and status, scientists usually have to publish in high-profile journals,” he says. “That favors positive and ‘breakthrough’ findings, since those are what earn citations and impact. So it is hardly surprising that scientists deceive themselves into seeing perfect, revolutionary results in their experimental data.”
Nosek agrees, saying that one of the most distorting influences is the reward system that confers acclaim, academic positions, and grants. “To advance my career, I need to publish as often as possible, and in the most prominent journals possible. That means I must produce articles that have a high probability of being published.” And those, in his words, are articles with positive results (“I discovered,” not “I refuted”), original results (never “We confirmed an earlier finding”), and clean results (“We demonstrated that...” rather than “It is unclear how to interpret these results”). But “most of what happens in the lab doesn't look like that,” says Nosek; it looks more like a mess. “And how do you get from the mess to the beautiful result?” he asks. “You can be patient, or you can get lucky, or you can take the easiest road, making decisions, often unconsciously, about which data to select and how to analyze them, so that a tidy story emerges. But then you are certain to introduce bias into your reasoning.”
The point is not only that poor data and mistaken ideas can survive, but that good ideas can be suppressed by motivated reasoning and the demands of a career. The proposals made by geneticist Barbara McClintock in the 1940s and '50s that some DNA sequences can “jump” between chromosomes, and by biochemist Stanley Prusiner that misfolded proteins called prions can transmit their misfolding from one protein molecule to another, diverged so sharply from the prevailing orthodoxy that both researchers were cruelly ridiculed, until their work was confirmed and each received a Nobel Prize. Skepticism about bold claims is always justified, but in hindsight one can see that it sometimes stems not from honest doubts about the quality of the evidence, but from an inability to escape the biases of the dominant picture of the world. The examples of McClintock and Prusiner show that science does eventually self-correct when the weight of evidence demands it, says Nosek, but “we don't know of the cases where a similar idea appeared and was swept aside and forgotten.”
Scientists, of course, know about this phenomenon. Many are drawn to the philosopher Thomas Kuhn's theory that science advances through abrupt paradigm shifts, in which the accepted knowledge of an entire field is overturned and a completely new picture emerges. Between such leaps there is only “normal science,” which fits the general consensus, until the accumulated anomalies build enough pressure to break through and create a new paradigm. A classic example is the emergence of quantum physics at the beginning of the 20th century. The same pattern fits the 18th-century idea of phlogiston in chemistry, the supposed “fire substance” refuted by Lavoisier's oxygen theory. A famous quote attributed to Max Planck describes another way prejudice is overcome in science: “Science advances one funeral at a time.” New ideas break through only after the old guard dies off.
The role of bias in science became clear to Nosek when he was a graduate student in psychology. “Like many graduate students, my idealism about how science works was shaken when I studied research methods,” he says. “In that class we read a pile of papers that were already old, articles from the 1950s through the 1970s, about publication bias, the lack of reproducibility, the incomplete description of methods in published articles, the lack of access to original data, and the bias against null results.”
Since then, Nosek has devoted himself to improving how science is done [Ioannidis, JPA, Munafo, MR, Fusar-Poli, P., Nosek, BA, & David, SP. Publication and other reporting biases in cognitive sciences: detection, prevalence, and prevention. Trends in Cognitive Sciences 18, 235-241 (2014)]. He is convinced that the process and progress of science will run more smoothly if these biases are brought into the open, which means making research more transparent in its methods, assumptions, and interpretations. “Fighting these problems is not easy, because they are cultural, and no one person can change a culture,” he says. “So I started with a problem I could control: the workflow of my own research.”
Interestingly, Nosek believes that one of the most effective remedies for cognitive bias in science may come from a field that has recently drawn harsh criticism for an abundance of errors and delusions: pharmacology. In Nosek's view, it is precisely because these problems show up so plainly in the pharmaceutical industry that this community has adapted better than others to addressing them. For example, because of the well-known inclination of pharmaceutical companies and their partners to report positive trial results and to quietly bury negative ones, the law in the United States now requires all clinical trials to be registered before they begin. This obliges researchers to report the results whatever the outcome.
Nosek has organized a similar scheme for the pre-registration of studies, called the Open Science Framework (OSF). He had been preparing it for many years, but it took off only when Jeff Spies, a former software developer, joined his laboratory around 2009-2010 and made it his dissertation project. “Many people joined in, and it quickly grew,” says Nosek. “We launched the OSF website, and a community and funders gathered around it.” In 2013 Nosek and Spies co-founded the Center for Open Science in Charlottesville, which now runs the OSF and offers its services for free.
The idea, Nosek says, is that researchers “should write down in advance what they are studying and what they expect to find.” Then, when they run the experiments, they agree to analyze the results strictly within the framework of that original plan. It sounds elementary, like the way the workings of science are explained to children. And so it is, yet in practice it happens very rarely. Instead, as Fiedler admits, analyses rest on unstated and usually unconscious assumptions about which results will or will not turn up. Nosek says that researchers who have used the OSF are often surprised, once the results come in, at how far their project has drifted from the goals they originally set.
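Conceptually, pre-registration amounts to freezing the analysis plan before any data exist. The sketch below only illustrates that idea and is not the OSF's actual mechanism; the plan fields, the hashing scheme, and the function names are invented for the example.

```python
# A conceptual sketch of pre-registration (not the real OSF workflow):
# fingerprint the analysis plan before data collection, then refuse to
# analyze if the plan has quietly changed.

import hashlib
import json

plan = {
    "hypothesis": "treatment group scores higher than control",  # hypothetical
    "test": "two-sample t-test, two-sided",
    "alpha": 0.05,
    "primary_outcome": "score",
}

def fingerprint(plan: dict) -> str:
    return hashlib.sha256(json.dumps(plan, sort_keys=True).encode()).hexdigest()

registered = fingerprint(plan)  # recorded before any data are collected

def run_preregistered_analysis(plan: dict, data, registered: str):
    if fingerprint(plan) != registered:
        raise ValueError("Analysis plan changed after registration")
    # ...run exactly the pre-specified test on `data` here...
```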
Fiedler has used the service and says it not only protects the integrity of a study but also makes the work run more smoothly. “Pre-registering with the OSF makes you think through all the details in advance, so the design, along with part of the write-up, is ready before I start collecting data,” she says. “Knowing that helps me separate the results I trust more from those I trust less.” And she is not alone: the transparency of the process “lets any researcher judge whether a result is worth the valuable time spent on it.”
Stating your goals up front is a good way to check that you actually know what they are, says Hartgerink, who also uses the OSF. “Once we decided to do this, we noticed that spelling out a hypothesis is a difficult task in itself,” a sign that hypotheses had not really been clearly formulated before. “Pre-registration is all but mandatory if you want to test a hypothesis,” he concludes. Fiedler says that over the past year she and all of her students have used the OSF scheme. “I have learned so much by taking part in this project that I can only recommend it to everyone in our field,” she says.
The difference between the OSF and the usual way of doing things is substantial, says Hartgerink. Because most researchers write up a study only after it has been conducted, hypotheses are not set down explicitly in advance. “That leads to hypotheses being conveniently reformulated after the results are in.” The psychologist Ernest O'Boyle of the University of Iowa and his colleagues have called this tendency to embellish results in retrospect the “Chrysalis Effect.” One consequence, according to Hartgerink, is that unexpected results are often presented as expected ones. “Ask anyone on the street whether that is the right thing to do, and they will say it is wrong. But in science it has long been customary.”
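A short simulation shows why hypothesizing after the results are known inflates false positives. Here a hypothetical study measures ten unrelated outcomes with no true effects anywhere, then “discovers” whichever one happens to cross the 0.05 threshold; all numbers are assumptions for illustration.

```python
# The Chrysalis Effect in miniature: with many outcomes and no real
# effects, picking the hypothesis after seeing the data "works" often.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_runs, n_outcomes, n = 2_000, 10, 30

lucky_runs = 0
for _ in range(n_runs):
    a = rng.normal(size=(n_outcomes, n))   # group A, 10 unrelated outcomes
    b = rng.normal(size=(n_outcomes, n))   # group B, no true differences
    _, p = stats.ttest_ind(a, b, axis=1)
    if (p < 0.05).any():                   # report the "best" outcome post hoc
        lucky_runs += 1

# Roughly 40% of runs yield at least one publishable-looking "effect".
print(f"Runs with at least one 'finding': {lucky_runs / n_runs:.0%}")
```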
Often this drift in hypotheses and goals happens without explicit intent, or even awareness. “Over the course, sometimes a very long one, of designing an experiment, collecting the data, analyzing them, and presenting the results to colleagues, our view of the original question and of the relevant results evolves,” says Fiedler. “Along the way we can forget the initial failed tests and present new ideas as answers to different questions based on the same data.” This way of doing science has real value, she says: it is important to notice unforeseen connections. But it not only shifts the goals of a study; it can also leave researchers with “an excessively strong belief in effects that may well be spurious.” The OSF forces researchers to keep the goalposts where they were.
But if you restrict yourself to a narrow set of goals before the experiments begin, won't you cut off potentially fruitful paths you could not have foreseen? Perhaps, Nosek says, but “learning from the data” is not a good way to build reliable conclusions. “We currently mix up exploratory and confirmatory research,” he says. “What people keep forgetting is that you cannot generate hypotheses and test them on the same data.” If you find a promising new direction, you need to pursue it separately, rather than telling yourself that the experiment was about it from the very beginning.

Fiedler disputes the charge that pre-registration kills creativity and freedom. “It doesn't have to be done by everyone and always,” she says, and exploratory research that gathers data without a specific goal or hypothesis still has its place. But the two approaches must be understood as distinct.

The main obstacle, according to Hartgerink, is education: researchers are simply not taught to work this way. They would do well to learn. “If new researchers do not start applying these approaches now,” he says, “in ten years they may find themselves on the sidelines, because doing research in a reproducible, transparent, and open way is already becoming the norm.”

Looking ahead, Nosek imagines a “scientific utopia” in which science becomes a far more efficient way of accumulating knowledge. No one claims that the OSF is a panacea that will take us there. As Oransky put it: “One of the hardest problems is getting scientists to stop deceiving themselves. That requires eliminating motivated reasoning and confirmation bias, and I have not found any good solutions to these problems.” So Nosek believes that, alongside the OSF, open-access publication and open, continuous peer review are necessary constraints. We may never be rid of our biases, but we can muffle their alluring call. As Nosek wrote with his colleague, psychologist Yoav Bar-Anan of Ben-Gurion University in Israel: “The critical barriers to change are not technical or financial; they are social. Although scientists guard the status quo, they also have the power to change it.”

Philip Ball is the author of Invisible: The Dangerous Allure of the Unseen and many other books on science and art.