
Ten common statistical mistakes to watch out for when writing or reviewing a manuscript
REVIEWED BY Glenda McLean, FASA | ASA SIG: Research
REFERENCE | Authors: Makin T & Orban de Xivry JJ
WHY THE STUDY WAS PERFORMED
Much has been written about improving the reproducibility of research and strengthening statistical analysis techniques. This article addresses common statistical oversights and gives peer reviewers and publishers a tool for identifying frequent problems in research manuscripts.
HOW THE STUDY WAS PERFORMED
The list was developed at the journal club of the London Plasticity Lab, where papers are discussed. The issues in the list are relevant to any scientific discipline that uses statistics to assess findings. For each mistake, the authors discuss how it can arise, explain how readers can detect it, and offer a solution to the problem.
A list of some of the most common statistical mistakes that appear in the scientific literature.
WHAT THE STUDY FOUND
The list of ten common statistical mistakes includes:
absence of an adequate control condition/group
interpreting comparisons between two effects without directly comparing them
inflating the units of analysis
spurious correlations
use of small samples
circular analysis
flexibility of analysis: p-hacking
failing to correct for multiple comparisons
overinterpreting nonsignificant results
confusing correlation with causation
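Two of these pitfalls, flexible analysis across many tests and failure to correct for multiple comparisons, can be seen in a small simulation. The sketch below is illustrative only (the scenario, sample sizes, and number of tests are assumptions, not taken from the reviewed article): it runs 20 comparisons on pure noise and shows how often at least one "significant" result appears with and without a Bonferroni correction.

```python
# Illustrative sketch (assumed scenario, not from the reviewed article):
# how failing to correct for multiple comparisons inflates false positives.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha, n_tests, n_sims = 0.05, 20, 2000

any_uncorrected = any_bonferroni = 0
for _ in range(n_sims):
    # 20 comparisons on pure noise: there is no true effect in any of them
    pvals = np.array([
        stats.ttest_ind(rng.normal(size=15), rng.normal(size=15)).pvalue
        for _ in range(n_tests)
    ])
    any_uncorrected += (pvals < alpha).any()
    any_bonferroni += (pvals < alpha / n_tests).any()  # Bonferroni correction

# With 20 independent tests, ~1 - 0.95**20 ≈ 0.64 of runs show a false positive
print(f"P(>=1 false positive), uncorrected: {any_uncorrected / n_sims:.2f}")
print(f"P(>=1 false positive), Bonferroni:  {any_bonferroni / n_sims:.2f}")
```

The uncorrected family-wise error rate sits near 64%, roughly thirteen times the nominal 5%, which is why reviewers are urged to check how many comparisons were made and what correction was applied.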
Item 5 is summarised as follows:
The use of small samples poses significant challenges. Small samples can reliably detect only large effects, and when they do yield significant results, the estimated effect size tends to be inflated. Small samples are also prone to missing real effects because of insufficient statistical power; larger samples increase the likelihood of detecting true effects.
Small samples also make it harder to test the assumption of normality, as the sample distribution may deviate from normality even when the population does not. Reviewers must critically examine the sample size to assess whether the study's claims are reasonable.
Researchers should present evidence that their study is sufficiently powered. If the sample size is limited, researchers must justify this limitation and demonstrate efforts to mitigate its impact.
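Both problems described above, low power and inflated effect estimates among significant small-sample results, can be demonstrated with a quick simulation. The sketch below is a minimal illustration under assumed parameters (a true standardized effect of d = 0.3 and groups of 10 versus 100), not an analysis from the reviewed article:

```python
# Minimal sketch (assumed parameters, not from the reviewed article):
# small samples have low power, and their significant results inflate d.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_d = 0.3   # a modest true effect (Cohen's d), chosen for illustration
alpha = 0.05

def simulate(n, n_sims=5000):
    """Return empirical power and mean observed d among significant runs."""
    sig_effects, hits = [], 0
    for _ in range(n_sims):
        a = rng.normal(0.0, 1.0, n)       # control group
        b = rng.normal(true_d, 1.0, n)    # treatment group, shifted by true_d
        t, p = stats.ttest_ind(b, a)
        if p < alpha and t > 0:
            hits += 1
            pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
            sig_effects.append((b.mean() - a.mean()) / pooled_sd)
    return hits / n_sims, float(np.mean(sig_effects))

for n in (10, 100):
    power, d_sig = simulate(n)
    # with n=10 few runs reach significance, and those that do overestimate d
    print(f"n={n:3d}: power={power:.2f}, mean significant d={d_sig:.2f}")
```

With n = 10 per group, only around one run in ten reaches significance, and the significant runs report effect sizes roughly three times the true value; with n = 100 power rises substantially and the estimates sit much closer to the truth.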
RELEVANCE TO CLINICAL PRACTICE
Reviewers of journal articles should carefully examine the experimental design and statistical analysis of all manuscripts. This article addresses ten mistakes that frequently appear in published research. By considering these factors, readers can better understand how research conclusions are derived.