Ten common statistical mistakes to watch out for when writing or reviewing a manuscript

REVIEWED BY Glenda McLean, FASA | ASA SIG: Research

REFERENCE | Authors: Makin T & Orban de Xivry JJ

WHY THE STUDY WAS PERFORMED

Much has been written about improving the reproducibility of research and strengthening statistical analysis techniques. This article addresses common statistical oversights and gives peer reviewers and publishers a tool for identifying frequent problems in research manuscripts.

HOW THE STUDY WAS PERFORMED

The list was developed during journal-club discussions of papers at the London Plasticity Lab. The issues are relevant to any scientific discipline that uses statistics to assess findings. For each mistake, the authors describe how it can arise, explain how readers can detect it, and offer a solution.

A list of some of the most common statistical mistakes that appear in the scientific literature.

WHAT THE STUDY FOUND

The list of ten common statistical mistakes includes:

  1. absence of an adequate control condition/group

  2. interpreting comparisons between two effects without directly comparing them

  3. inflating the units of analysis

  4. spurious correlations

  5. use of small samples

  6. circular analysis

  7. flexibility of analysis: p-hacking

  8. failing to correct for multiple comparisons

  9. overinterpreting nonsignificant results

  10. correlation and causation

Item 5 is summarised as follows:

The use of small samples poses significant challenges. A small sample can reliably detect only large effects, so statistically significant findings from small studies tend to overestimate the true effect size. Small samples are also prone to missing real effects because of insufficient statistical power; larger samples increase the likelihood of detecting true effects.

Small samples also complicate testing the assumption of normality, as the sample distribution may deviate from normality. Reviewers must critically examine the sample size to assess whether the study’s claims are reasonable.

Researchers should present evidence that their study is sufficiently powered. If the sample size is limited, researchers must justify this limitation and demonstrate efforts to mitigate its impact.
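The two problems described above, low power and inflated effect sizes among significant results, can be illustrated with a short simulation. The sketch below (not from the article; the test type, effect size, and sample sizes are illustrative assumptions) compares two groups drawn from unit-variance normal distributions using a simple two-sided z-test, then reports the estimated power and the mean effect estimate among only the significant runs:

```python
import random
import statistics

def simulate(n, true_d=0.3, sims=5000, z_crit=1.96, seed=42):
    """Simulate two-group comparisons with unit-variance normal data.

    Uses a z-test (variance assumed known, for simplicity). Returns the
    estimated power and the mean effect estimate among significant runs,
    which illustrates effect-size inflation in underpowered studies.
    """
    rng = random.Random(seed)
    hits, significant_effects = 0, []
    for _ in range(sims):
        a = [rng.gauss(0.0, 1.0) for _ in range(n)]       # control group
        b = [rng.gauss(true_d, 1.0) for _ in range(n)]    # treatment group
        diff = statistics.fmean(b) - statistics.fmean(a)
        se = (2.0 / n) ** 0.5           # std. error of the difference (sigma = 1)
        if abs(diff / se) > z_crit:     # two-sided test at alpha = 0.05
            hits += 1
            significant_effects.append(diff)
    power = hits / sims
    mean_sig = statistics.fmean(significant_effects) if significant_effects else float("nan")
    return power, mean_sig

for n in (10, 50, 200):
    power, mean_sig = simulate(n)
    print(f"n={n:>3}: power={power:.2f}, "
          f"mean significant effect={mean_sig:.2f} (true=0.30)")
```

With small n the test rarely detects the effect, and the runs that do reach significance report effect estimates well above the true value of 0.30; with large n, power rises and the significant-run estimates converge toward the truth.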

RELEVANCE TO CLINICAL PRACTICE

Reviewers of journal articles should carefully examine the experimental design and statistical analysis of all manuscripts. This article addresses ten common mistakes that frequently appear in journals. By considering these factors, readers can better understand how research conclusions are derived.
