
Wishing for a new research system

Half of all research published in the leading scientific journals cannot be reproduced, and is thus arguably incorrect. Does this mean that the research is substandard? Or is this entirely as it should be? We should be better at having the courage to fail, several researchers at the University of Gothenburg argue.

In the book Kris i forskningsfrågan: eller vad fan får vi för pengarna? (Crisis in research: or what the hell are we getting for our money?), published a couple of years ago, the journalist and social commentator Hanne Kjöller examines how well taxpayers' research money is managed by politicians, authorities, universities and the researchers themselves. Her conclusion is, of course, that it is not managed very well. In the current climate, as the trial of Paolo Macchiarini is coming to an end, the book is relevant once again. It was the infamous surgeon's use of humans as guinea pigs that made Kjöller wonder how our research system works. How was it even possible that he was granted research funding?

In the book, Kjöller focuses on a number of themes that are well known within the research community but largely unknown to the public, and the one that received the most media attention concerned the so-called “replication crisis”. That is completely understandable. Kjöller quotes John Ioannidis at Stanford University, who in one of the world's most cited articles (2005) argued that half of the research results published in the most prestigious journals are incorrect, as they cannot be replicated when the experiments are repeated. Furthermore, he estimates that 85 percent of all the billions invested in medical studies, clinical trials and other research is wasted. In the world of media logic, that kind of thing makes a cracking story.

But does that mean that science is in crisis? – That is an overly sweeping statement, but it is clear that the low degree of reproducibility discovered in certain subjects is a problem, says Olle Häggström, Professor of Mathematical Statistics.

He does not, however, think it is mostly a matter of cheating, but rather of carelessness, incompetence and overly naive interpretations of the results of statistical analyses. – But psychologists, doctors and economists have paid attention to the problem, and quite a lot is currently being done to get to grips with it, says Olle Häggström.

But Eva Ranehill, Senior Lecturer at the Department of Economics, thinks that more should be done. – The way in which research is conducted should be fundamentally changed. Greater emphasis must be placed on statistical robustness and sufficiently large samples. And the main hypotheses should not be changed during the course of a research study, but remain the same when you publish. Similarly, it should be made clear that the results of any exploratory analyses of one's data, which are statistically more uncertain, need to be examined more closely in further studies.

She herself became abruptly aware of the problem about ten years ago. As a postdoctoral fellow in behavioural economics, she led a research group that set out to replicate a study of body postures and power. The study was well known: in a TED Talk with millions of views, the social psychologist who carried out the research explains how people, by adopting an expansive or slumped posture for a few minutes, can affect their testosterone and cortisol levels. – We thought it was interesting and wanted to build on the study in our own research.

The experimental group in the original study consisted of 42 people; Ranehill increased the number to 200. The results did not replicate.
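To give a sense of why the larger sample matters, here is a minimal power-analysis sketch, my own illustration rather than anything from the researchers' study. The assumed effect size (Cohen's d = 0.5, a conventional "medium" effect) is a hypothetical choice; the group sizes 42 and 200 come from the article.

```python
# Compare the statistical power of the original study (n = 42) and the
# replication (n = 200) for a two-sample t-test, assuming a true
# medium-sized effect. Cohen's d = 0.5 is an illustrative assumption.

from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for total_n in (42, 200):
    power = analysis.power(
        effect_size=0.5,     # assumed true effect (Cohen's d)
        nobs1=total_n // 2,  # participants in the first group
        ratio=1.0,           # equally sized groups
        alpha=0.05,          # conventional significance level
    )
    print(f"n = {total_n:3d}: power = {power:.2f}")

# Approximate output: n = 42 gives power ~0.34, n = 200 gives ~0.94.
```

Under these assumptions, the original design would miss a real medium-sized effect roughly two times out of three, whereas the larger replication almost never would; a null result from the bigger study is therefore far more informative.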

Eva Ranehill believes there is pressure on researchers to publish interesting results.

In her book, Hanne Kjöller uses the body posture study as a blatant example of lousy research that has received a lot of attention, and where the researcher behind it was not entirely honest about their statistical selection. And this may be true. But on the other hand, the system worked – the shortcomings in the research were revealed.

– I'm not worried at all. It is not easy to manipulate the system. Anyone who tries to do so, by over-interpreting data, for example, is swiftly caught red-handed. No research is interesting until it has been replicated, with someone else achieving the same results, says Henrik Zetterberg, Professor at the Institute of Neuroscience and Physiology.

Hanne Kjöller has written the book Kris i forskningsfrågan.

– Researchers should have the courage to fail, says Henrik Zetterberg.

Sure, he thinks it is a little sad that replicability is so low for the research in the leading journals, but why worry about them? – If you want to be misled and get a skewed picture of what a research field looks like, you should only follow the high-impact journals. They need to sell ads, and in order to maintain their impact factor it is important to publish research that receives a lot of attention, and which often speculatively suggests how something important actually works. This means that, for example, it is impossible for me to publish a study in such a journal about a biomarker that was thought to be able to detect Alzheimer's disease but which does not.

Instead, it ends up in one of the low-impact journals run by patient associations, for example. – It is in these journals that the most important research for the research field and the patients is published.

Maybe researchers need to get better at having the courage to fail? Precisely, Henrik Zetterberg answers. – One should not be so judgemental about non-replicable results. If everything always had to be absolutely perfect before publishing, the field would come to a standstill, which leads to an even worse bias, where people do not dare to present findings they do not fully understand but which are interesting, he says, and continues: – This is something that makes doctoral students anxious; they are terrified of making mistakes. But if a researcher is always right throughout his or her career… well, then something is not right.

Eva Ranehill says that she has managed to get an unusually large number of “null results” published. – I've been lucky. The example in Hanne Kjöller's book became such a big thing because the results we tried to replicate had received such an incredible amount of media attention, she says.

But as a rule, that is not the case.

– At present, it is often difficult to cut through the noise with findings showing that an effect presented in a published article does not in fact exist. Depending on the importance of the original result, that knowledge may not always be disseminated in the leading journals, but we must find a system for it, so that we do not all go around replicating each other's research without anyone knowing about it. We need to find out quickly when results from previous research do not hold up.

But of course, this also requires that researchers write articles about their research studies even when they do not find what they were looking for, which does not really chime with the incentive structure that permeates academia today. – Hanne Kjöller claims that it is about research money, but I do not think that is the main reason. It is about jobs. As a young researcher, you have to publish a certain number of articles to keep your job, and those who want to advance must also publish in the top-ranked journals. Researchers thus have incentives to arrive at results and connections that are publishable. And the journals want articles that are considered interesting and generate a lot of citations. A null result is often considered less interesting, she says.

– If you get a null result without having specified exactly which statistical analyses you intend to perform in a research plan, well, then it is easy to run your data against more variables. The more variables you add, the greater the probability that you will find something that looks statistically significant. To a great extent, I think, this happens unintentionally. You just want to make the best of your material, but at the same time you increase the risk of producing statistically spurious results.
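A small simulation can make the mechanism Ranehill describes concrete. The sketch below is my own illustration, not something from the article: it tests 20 hypothetical outcome variables on data containing no true effect at all, and counts how often at least one of them nevertheless comes out "significant" at the conventional 5 percent level.

```python
# Simulate the multiple-comparisons problem: with 20 independent tests
# on pure noise, the chance that at least one looks "significant" at
# p < 0.05 is about 1 - 0.95**20, i.e. roughly 64 percent.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_per_group = 100     # participants per group (illustrative choice)
n_variables = 20      # hypothetical number of outcomes tested
n_simulations = 2000  # repeated experiments under the null

false_positive_runs = 0
for _ in range(n_simulations):
    # Two groups with NO true difference on any variable.
    group_a = rng.normal(size=(n_per_group, n_variables))
    group_b = rng.normal(size=(n_per_group, n_variables))
    _, p_values = stats.ttest_ind(group_a, group_b, axis=0)
    if (p_values < 0.05).any():
        false_positive_runs += 1

print(f"Runs with at least one 'significant' result: "
      f"{false_positive_runs / n_simulations:.0%}")  # expect ~64%
```

This is why pre-specifying the analyses, as Ranehill advocates, matters: a result that survives a single planned test is far more credible than one found by scanning twenty variables.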

The question is whether research is stuck in this system. No, Eva Ranehill does not think so, but a more comprehensive debate is needed to find solutions. Because they exist. – There are, for example, journals that accept articles after evaluating research plans instead of results, so-called registered reports. This means that researchers can write down their best plan and intended analyses in advance, and then not have to worry about what they come up with.


Text: Lars Nicklason. Photo: Johan Wingborg.
