
Lund University Publications


Can I trust this paper?

Anikin, Andrey (2025). In Psychonomic Bulletin & Review
Abstract
After a decade of data falsification scandals and replication failures in psychology and related empirical disciplines, there are urgent calls for open science and structural reform in the publishing industry. In the meantime, however, researchers need to learn how to recognize tell-tale signs of methodological and conceptual shortcomings that make a published claim suspect. I review four key problems and propose simple ways to detect them. First, the study may be fake; if in doubt, inspect the authors’ and journal’s profiles and request to see the raw data to check for inconsistencies. Second, there may be too little data; low precision of effect sizes is a clear warning sign of this. Third, the data may not be analyzed correctly; excessive flexibility in data analysis can be deduced from signs of data dredging and convoluted post hoc theorizing in the text, while violations of model assumptions can be detected by examining plots of observed data and model predictions. Fourth, the conclusions may not be justified by the data; common issues are inappropriate acceptance of the null hypothesis, biased meta-analyses, over-generalization over unmodeled variance, hidden confounds, and unspecific theoretical predictions. The main takeaways are to verify that the methodology is robust and to distinguish between what the actual results are and what the authors claim these results mean when citing empirical work. Critical evaluation of published evidence is an essential skill to develop as it can prevent researchers from pursuing unproductive avenues and ensure better trustworthiness of science as a whole.
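The abstract's second warning sign, low precision of effect sizes, can be made concrete with a short sketch (not from the paper itself): a standardized effect size such as Cohen's d comes with a confidence interval whose width shrinks as sample size grows, so a very wide interval signals that the study had too little data to pin the effect down. The function below is illustrative, using the standard normal-approximation formula for the standard error of d; the input numbers are made up for demonstration.

```python
import math

def cohens_d_ci(mean1, mean2, sd1, sd2, n1, n2, z=1.96):
    """Cohen's d for two groups with an approximate 95% CI
    (normal approximation to the standard error of d)."""
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / sp
    # Approximate standard error of d
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d, (d - z * se, d + z * se)

# Same hypothetical group means and SDs, different sample sizes:
d_small, ci_small = cohens_d_ci(10.5, 10.0, 2.0, 2.0, n1=15, n2=15)
d_large, ci_large = cohens_d_ci(10.5, 10.0, 2.0, 2.0, n1=500, n2=500)
```

With n = 15 per group the interval around d = 0.25 spans well over a full unit of standardized effect, so the point estimate tells us very little; with n = 500 per group the interval is several times narrower. Reporting (or reconstructing) such intervals is one quick way to judge whether a published effect is estimated precisely enough to be trusted.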
author: Anikin, Andrey
organization:
publishing date: 2025
type: Contribution to journal
publication status: epub
subject:
keywords: Research integrity, Replication, Statistics, Power
in: Psychonomic Bulletin & Review
pages: 15 pages
publisher: Springer
ISSN: 1069-9384
DOI: 10.3758/s13423-025-02740-3
language: English
LU publication?: yes
id: 4bba02a6-6848-4e9e-ad2d-02e64ea039a4
date added to LUP: 2025-07-17 07:01:29
date last changed: 2025-07-18 16:24:33
@article{4bba02a6-6848-4e9e-ad2d-02e64ea039a4,
  abstract     = {{After a decade of data falsification scandals and replication failures in psychology and related empirical disciplines, there are urgent calls for open science and structural reform in the publishing industry. In the meantime, however, researchers need to learn how to recognize tell-tale signs of methodological and conceptual shortcomings that make a published claim suspect. I review four key problems and propose simple ways to detect them. First, the study may be fake; if in doubt, inspect the authors’ and journal’s profiles and request to see the raw data to check for inconsistencies. Second, there may be too little data; low precision of effect sizes is a clear warning sign of this. Third, the data may not be analyzed correctly; excessive flexibility in data analysis can be deduced from signs of data dredging and convoluted post hoc theorizing in the text, while violations of model assumptions can be detected by examining plots of observed data and model predictions. Fourth, the conclusions may not be justified by the data; common issues are inappropriate acceptance of the null hypothesis, biased meta-analyses, over-generalization over unmodeled variance, hidden confounds, and unspecific theoretical predictions. The main takeaways are to verify that the methodology is robust and to distinguish between what the actual results are and what the authors claim these results mean when citing empirical work. Critical evaluation of published evidence is an essential skill to develop as it can prevent researchers from pursuing unproductive avenues and ensure better trustworthiness of science as a whole.}},
  author       = {{Anikin, Andrey}},
  issn         = {{1069-9384}},
  keywords     = {{Research integrity; Replication; Statistics; Power}},
  language     = {{eng}},
  publisher    = {{Springer}},
  series       = {{Psychonomic Bulletin & Review}},
  title        = {{Can I trust this paper?}},
  url          = {{http://dx.doi.org/10.3758/s13423-025-02740-3}},
  doi          = {{10.3758/s13423-025-02740-3}},
  year         = {{2025}},
}