
Ten common statistical mistakes to watch out for when writing or reviewing a manuscript

REVIEWED BY Helen Beets | ASA SIG: Research

REFERENCE | Journal: eLife

WHY THE STUDY WAS PERFORMED

Discussion at the Journal Club of the London Plasticity Lab concluded that, although much research centres on the ideals of reproducibility and on improving statistical training, few papers address the errors themselves and offer practical solutions that researchers can use as a guide when planning and developing their own research. Additionally, it was felt that peer reviewers need more guidance for identifying statistical mistakes when assessing manuscripts.

Whilst individual research errors have been outlined in the literature, the authors of this review wanted to highlight the most common or critical ones and collect them in a single paper to be used as a guide. They felt that much attention had been directed towards non-statistical errors, such as ethics and methodology, and that statistical oversights deserve more scrutiny and debate.

“Mistakes have their origins in ineffective experimental designs, inappropriate analyses and/or flawed reasoning.”

HOW THE STUDY WAS PERFORMED

This was a review of papers in neuroscience, psychology, clinical and bioengineering journals that relate to statistical errors and common mistakes in statistical planning. Although the papers were predominantly from neuroscience, they were considered relevant because statistical analysis is common across the medical sciences, which use similar methods to yield results that can be interpreted and discussed. The mistakes reported most often were compiled into a list of the 10 most common errors.

WHAT THE STUDY FOUND

The 10 most common mistakes identified and reviewed were:

  1. absence of an adequate control condition/group

  2. interpreting comparisons between two effects without directly comparing them (see the first sketch below)

  3. inflating the unit of analysis

  4. spurious correlations

  5. use of small samples

  6. circular analysis (see the second sketch below)

  7. flexibility of analysis: p-hacking

  8. failing to correct for multiple comparisons

  9. over-interpreting non-significant results

  10. correlation and causation.
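
To make mistake 2 concrete, here is a minimal Python sketch (our illustration, not taken from the paper; the group sizes and effect sizes are arbitrary assumptions). Treatment A may cross the significance threshold against control while treatment B does not, but only a direct test of A against B can support the claim that the two treatments differ.

```python
# Mistake 2: "A is significant, B is not" does not imply that A and B differ.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(0.0, 1.0, 30)  # control group (assumed n = 30)
group_a = rng.normal(0.6, 1.0, 30)  # treatment A (assumed true effect 0.6)
group_b = rng.normal(0.4, 1.0, 30)  # treatment B (assumed true effect 0.4)

# Separate comparisons against control say nothing about A versus B...
print("A vs control:", stats.ttest_ind(group_a, control).pvalue)
print("B vs control:", stats.ttest_ind(group_b, control).pvalue)

# ...the claim "A differs from B" is licensed only by this direct test.
print("A vs B      :", stats.ttest_ind(group_a, group_b).pvalue)
```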

For each error, the authors described the problem, explained how it can arise and how authors and reviewers can detect it, and offered a solution. Many of the mistakes were found to be interdependent, meaning that one can create or compound another. Each section was then given a suggested article for further reading.
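
Circular analysis (mistake 6) also lends itself to a short demonstration. The sketch below (ours, assuming pure-noise data) selects units because they show an effect and then tests those same units, which makes noise look significant; selecting on one half of the measurements and testing on the other avoids the circularity.

```python
# Mistake 6: selecting data for an effect, then testing that same effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
data = rng.normal(0.0, 1.0, (100, 40))  # 100 units x 40 pure-noise measurements

# Circular: selection and test use the same data, so noise looks "significant".
sel = data.mean(axis=1) > 0.2
print("circular p  :", stats.ttest_1samp(data[sel].mean(axis=1), 0.0).pvalue)

# Independent: select on one half of the measurements, test on the other half.
half_a, half_b = data[:, :20], data[:, 20:]
sel = half_a.mean(axis=1) > 0.2
print("split-half p:", stats.ttest_1samp(half_b[sel].mean(axis=1), 0.0).pvalue)
```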

The majority of the errors and solutions relate to p-value analysis; the key assumption of the paper, therefore, is that the p-value is the most important part of the results.
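
Mistake 8 illustrates why p-values dominate the discussion. In the simulation below (our sketch; the 20 tests and samples of 30 are arbitrary assumptions), every null hypothesis is true, yet roughly two-thirds of runs still yield at least one uncorrected p < .05, since 1 - 0.95^20 is about 0.64; a Bonferroni threshold of 0.05/20 keeps the rate near 5%.

```python
# Mistake 8: many uncorrected tests produce "hits" even with no true effects.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_tests, n_sims = 20, 1000
runs_with_hit = 0

for _ in range(n_sims):
    # 20 comparisons in which the null hypothesis is true for every test
    pvals = [stats.ttest_ind(rng.normal(size=30), rng.normal(size=30)).pvalue
             for _ in range(n_tests)]
    if min(pvals) < 0.05:  # at least one uncorrected false positive?
        runs_with_hit += 1

print(f"runs with at least one p < .05: {runs_with_hit / n_sims:.0%}")
# Expected ~64%; a Bonferroni threshold of 0.05 / 20 would keep this near 5%.
```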

RELEVANCE TO CLINICAL PRACTICE

The primary intended use of this paper is as a guide for researchers when planning and designing their own statistical methods. Whilst the authors offer their own solutions, they recognise that there is more than one way to solve each problem.

The authors recommend that readers access the online version of the paper and contribute using the annotation function. This allows for open discussion and gives readers an opportunity to offer further solutions from which others can benefit.

They believe that one of the best ways to prevent errors reaching clinical practice is to publish peer-reviewed research that is free of statistical error.
