2013 : WHAT *SHOULD* WE BE WORRIED ABOUT?

Marco Iacoboni
Neuroscientist; Professor of Psychiatry & Biobehavioral Sciences, David Geffen School of Medicine, UCLA; Author, Mirroring People
Science Publishing

We should be worried about science publishing. When I say science publishing, I am really thinking about the peer-reviewed life science and biomedical literature. We should be worried about it because it seems that the only publishable data in this literature are novel findings. That is a serious problem, because one of the crucial aspects of science is the reproducibility of results. Yet if you replicate an experiment and its results, no one wants to publish your replication data. "We know that already" is the typical response. Even when your experiment is not strictly a replication but merely resembles a previously published one, and your results are close to (though not identical with) those already reported, nobody wants to see your study published unless you find a way of discussing your data in a new light. Only experiments that produce results seemingly opposite to those of previously published studies get a chance to be published; here, the failure to replicate is what makes the experiment interesting.

The other big problem is that experiments producing negative findings, or 'null results' (that is, experiments that fail to demonstrate any effect), are also difficult to publish, unless they show a lack of replication of a previously published, important finding.

These two practices combined make it very difficult to figure out—on the basis of the literature alone—which results are solid and replicable and which results are not. And that's clearly a problem.

Some have argued that to fix this problem we should publish all our negative results and publish positive results only after replicating them ourselves. I think that's a great idea, although I don't see the life science and biomedical community embracing it anytime soon. But let me give some practical examples of why things are rather messed up in the life science and biomedical literature, and how they could be fixed.

One of the most exciting recent developments in human neuroscience is what is called 'non-invasive neuromodulation.' It consists of a number of techniques that use either magnetic fields or low currents to stimulate the human brain painlessly and with negligible or no side effects. One of these techniques has already been approved by the FDA to treat depression. Other potential uses include reducing seizures in epileptic patients, improving recovery of function after brain damage, and, in principle, even improving cognitive capacities in healthy subjects.

In my lab, we are doing a number of experiments using neuromodulation, including two studies in which we stimulate two specific sites of the frontal lobe to improve empathy and reduce social prejudice. Every experiment has a rationale that is based on previous studies and on theories inspired by those studies. Our experiment on empathy is based mostly on our previous work on mirror neurons and empathy. Having done a number of those studies ourselves, we are fairly confident about the background on which we base the rationale for our experiment. The experiment on social prejudice, however, is inspired by a clever paper recently published by another group that also used neuromodulation of the frontal lobe. The cognitive task used in that study shares similarities with the cognitive mechanisms of social prejudice. But here is the catch: we know about that paper (because it was published!), yet we have no idea whether a number of other groups attempted something similar and failed to get any effect, simply because negative findings do not get published. Nor can we know how replicable the study that inspired our experiment is, because replication studies are not easily published. In other words, we have far more unknowns than we would like.

Publishing replications and negative findings would make it much easier to know what is empirically solid and what is not. If twenty labs perform the same experiment, eighteen get no effect, the remaining two get contrasting effects, and all of these studies are published, then you know simply by reading the literature that there isn't much to pursue in that line of research. But if fourteen labs get the same effect, three get no effect, and three get the opposite effect, the effect demonstrated by the fourteen labs is likely far more solid than one demonstrated by only three.
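To make the arithmetic concrete, here is a minimal sketch in Python (not part of the original argument; the function name and tallies are purely illustrative) showing how the literature looks when every outcome is published versus when null results are filtered out, as current practice tends to do.

```python
from collections import Counter

def literature_view(outcomes, publish_nulls=True):
    """Tally study outcomes as they would appear in the literature.
    If publish_nulls is False, null results are dropped, mimicking
    the current bias toward publishing only 'positive' findings."""
    visible = [o for o in outcomes if publish_nulls or o != "null"]
    return Counter(visible)

# Hypothetical scenarios from the text: twenty labs run the same experiment.
weak_effect = ["null"] * 18 + ["positive", "opposite"]
solid_effect = ["positive"] * 14 + ["null"] * 3 + ["opposite"] * 3

for name, outcomes in [("weak", weak_effect), ("solid", solid_effect)]:
    print(name, "| everything published:", dict(literature_view(outcomes)))
    print(name, "| nulls suppressed:    ",
          dict(literature_view(outcomes, publish_nulls=False)))
```

In the first scenario, suppressing nulls leaves one positive and one opposite result in view, hiding the eighteen null results that would tell a reader the effect is not worth pursuing.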

In the current publishing system, it is hard to reach such conclusions. One way of doing it is to pool experiments that share a number of features. For instance, our group and others have investigated mirror neurons in autism and concluded that mirror neuron activity is reduced in autism; some other groups have failed to demonstrate this. The studies that show mirror neuron impairment in autism largely outnumber those that fail to show it, so in this case it is reasonable to draw solid conclusions from the scientific literature. In many other cases, however, as in the example of neuromodulation of the frontal lobe and social prejudice, there is much uncertainty, owing to the selectivity regarding what gets published and what is left out.

The simplest way to fix this problem is to evaluate whether a study should be published solely on the basis of the soundness of its experimental design, data collection, and analysis. If the experiment is well done, it should be published, whether or not it is a replication and no matter what kind of results it shows. A minority in the life science and biomedical community is finally voicing this alternative to the current dominant publishing practices. If that minority eventually becomes a majority, we will at last have a scientific literature that can be evaluated in quantitative terms (x number of studies show this, while y number of studies show that) rather than in qualitative terms (this study shows x, but that study shows y). Such an approach would make it even more difficult for irrational claims (denial of evolution or climate change, to name the most dramatic examples that come to mind) to masquerade as 'scientific.' It would also limit the 'controversies' in life science to those issues that are truly unclear, saving us all the time we now spend arguing about things the empirical data should have settled already.