Assessing the feasibility and impact of clinical trial trustworthiness checks via an application to Cochrane Reviews: Stage 2 of the INSPECT-SR project
(2025) In Journal of Clinical Epidemiology, 184.
Abstract
BACKGROUND AND OBJECTIVES: The aim of the INveStigating ProblEmatic Clinical Trials in Systematic Reviews (INSPECT-SR) project is to develop a tool to identify problematic RCTs in systematic reviews. In stage 1 of the project, a list of potential trustworthiness checks was created. The checks on this list must be evaluated to determine which should be included in the INSPECT-SR tool.
METHODS: We attempted to apply 72 trustworthiness checks to randomized controlled trials (RCTs) in 50 Cochrane reviews. For each, we recorded whether the check was passed, failed, or possibly failed, or whether it was not feasible to complete the check. Following application of the checks, we recorded whether we had concerns about the authenticity of each RCT. We repeated each meta-analysis after removing RCTs flagged by each check and again after removing RCTs where we had concerns about authenticity to estimate the impact of trustworthiness assessment. Trustworthiness assessments were compared to Risk of Bias and Grading of Recommendations Assessment, Development and Evaluation (GRADE) assessments in the reviews.
RESULTS: Ninety-five RCTs were assessed. Following application of the checks, assessors had some or serious concerns about the authenticity of 25% and 6% of the RCTs, respectively. Removing RCTs with either some or serious concerns resulted in 22% of meta-analyses having no remaining RCTs. However, many checks proved difficult to understand or implement, which may have led to unwarranted skepticism in some instances. Furthermore, we restricted assessment to meta-analyses with no more than five RCTs (54% contained only one RCT), which is likely to distort the estimated impact on results. No relationship was identified between trustworthiness assessment and Risk of Bias or GRADE.
CONCLUSION: This study supports the case for routine trustworthiness assessment in systematic reviews, as problematic studies do not appear to be flagged by Risk of Bias assessment. The study produced evidence on the feasibility and impact of trustworthiness checks. These results will be used, in conjunction with those from a subsequent Delphi process, to determine which checks should be included in the INSPECT-SR tool.
PLAIN LANGUAGE SUMMARY: Systematic reviews collate evidence from randomized controlled trials (RCTs) to find out whether health interventions are safe and effective. However, it is now recognized that the findings of some RCTs are not genuine, and some of these studies appear to have been fabricated. Various checks for these "problematic" RCTs have been proposed, but it is necessary to evaluate these checks to find out which are useful and which are feasible. We applied a comprehensive list of "trustworthiness checks" to 95 RCTs in 50 systematic reviews to learn more about them and to see how often performing the checks would lead us to classify RCTs as being potentially inauthentic. We found that applying the checks led to concerns about the authenticity of around one in three RCTs. However, we found that many of the checks were difficult to perform and could have been misinterpreted. This might have led us to be overly skeptical in some cases. The findings from this study will be used, alongside other evidence, to decide which of these checks should be performed routinely to try to identify problematic RCTs, to stop them from being mistaken for genuine studies and potentially being used to inform health care decisions.
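To make the sensitivity analysis described in the Methods concrete, the sketch below re-pools a meta-analysis after excluding trials flagged by a trustworthiness check. It is a minimal illustration only, not the authors' analysis code: the inverse-variance fixed-effect model, the trial names, and all numbers are hypothetical stand-ins.

```python
from math import sqrt

def pool_fixed_effect(trials):
    """Inverse-variance fixed-effect pooled estimate and its standard error."""
    weights = [1 / t["se"] ** 2 for t in trials]
    pooled = sum(w * t["est"] for w, t in zip(weights, trials)) / sum(weights)
    return pooled, sqrt(1 / sum(weights))

# Hypothetical trials: effect estimates (e.g., log risk ratios) with standard
# errors, plus the outcome of trustworthiness assessment for each RCT.
trials = [
    {"id": "RCT-A", "est": -0.35, "se": 0.15, "flagged": False},
    {"id": "RCT-B", "est": -0.80, "se": 0.10, "flagged": True},   # authenticity concerns
    {"id": "RCT-C", "est": -0.20, "se": 0.25, "flagged": False},
]

est_all, se_all = pool_fixed_effect(trials)
print(f"All trials:      {est_all:.2f} (SE {se_all:.2f})")

kept = [t for t in trials if not t["flagged"]]
if kept:
    est_kept, se_kept = pool_fixed_effect(kept)
    print(f"Flagged removed: {est_kept:.2f} (SE {se_kept:.2f})")
else:
    # Corresponds to the situation reported in the Results, where 22% of
    # meta-analyses had no RCTs remaining after exclusion.
    print("No RCTs remain after removing flagged trials.")
```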
- author
- organization
- publishing date: 2025-05-09
- type: Contribution to journal
- publication status: epub
- subject
- in: Journal of Clinical Epidemiology
- volume: 184
- article number: 111824
- publisher: Elsevier
- external identifiers:
  - scopus:105008771637
  - pmid:40349737
- ISSN: 0895-4356
- DOI: 10.1016/j.jclinepi.2025.111824
- language: English
- LU publication?: yes
- additional info: Copyright © 2025 The Author(s). Published by Elsevier Inc. All rights reserved.
- id: 5ec7d0fd-bd52-4ffb-bd83-398b2fc06590
- date added to LUP: 2025-08-11 13:00:15
- date last changed: 2025-08-12 04:12:22
@article{5ec7d0fd-bd52-4ffb-bd83-398b2fc06590, abstract = {{<p>BACKGROUND AND OBJECTIVES: The aim of the INveStigating ProblEmatic Clinical Trials in Systematic Reviews (INSPECT-SR) project is to develop a tool to identify problematic RCTs in systematic reviews. In stage 1 of the project, a list of potential trustworthiness checks was created. The checks on this list must be evaluated to determine which should be included in the INSPECT-SR tool.</p><p>METHODS: We attempted to apply 72 trustworthiness checks to randomized controlled trials (RCTs) in 50 Cochrane reviews. For each, we recorded whether the check was passed, failed, or possibly failed or whether it was not feasible to complete the check. Following application of the checks, we recorded whether we had concerns about the authenticity of each RCT. We repeated each meta-analysis after removing RCTs flagged by each check and again after removing RCTs where we had concerns about authenticity to estimate the impact of trustworthiness assessment. Trustworthiness assessments were compared to Risk of Bias and Grading of Recommendations Assessment, Development and Evaluation (GRADE) assessments in the reviews.</p><p>RESULTS: Ninety-five RCTs were assessed. Following application of the checks, assessors had some or serious concerns about the authenticity of 25% and 6% of the RCTs, respectively. Removing RCTs with either some or serious concerns resulted in 22% of meta-analyses having no remaining RCTs. However, many checks proved difficult to understand or implement, which may have led to unwarranted skepticism in some instances. Furthermore, we restricted assessment to meta-analyses with no more than five RCTs (54% contained only 1 RCT), which will distort the impact on results. No relationship was identified between trustworthiness assessment and Risk of Bias or GRADE.</p><p>CONCLUSION: This study supports the case for routine trustworthiness assessment in systematic reviews, as problematic studies do not appear to be flagged by Risk of Bias assessment. The study produced evidence on the feasibility and impact of trustworthiness checks. These results will be used, in conjunction with those from a subsequent Delphi process, to determine which checks should be included in the INSPECT-SR tool.</p><p>PLAIN LANGUAGE SUMMARY: Systematic reviews collate evidence from randomized controlled trials (RCTs) to find out whether health interventions are safe and effective. However, it is now recognized that the findings of some RCTs are not genuine, and some of these studies appear to have been fabricated. Various checks for these "problematic" RCTs have been proposed, but it is necessary to evaluate these checks to find out which are useful and which are feasible. We applied a comprehensive list of "trustworthiness checks" to 95 RCTs in 50 systematic reviews to learn more about them and to see how often performing the checks would lead us to classify RCTs as being potentially inauthentic. We found that applying the checks led to concerns about the authenticity of around 1 in three RCTs. However, we found that many of the checks were difficult to perform and could have been misinterpreted. This might have led us to be overly skeptical in some cases. 
The findings from this study will be used, alongside other evidence, to decide which of these checks should be performed routinely to try to identify problematic RCTs, to stop them from being mistaken for genuine studies and potentially being used to inform health care decisions.</p>}}, author = {{Wilkinson, Jack and Heal, Calvin and Antoniou, Georgios A and Flemyng, Ella and Ahnström, Love and Alteri, Alessandra and Avenell, Alison and Barker, Timothy Hugh and Borg, David N and Brown, Nicholas J L and Buhmann, Rob and Calvache, Jose A and Carlsson, Rickard and Carter, Lesley-Anne and Cashin, Aidan G and Cotterill, Sarah and Färnqvist, Kenneth and Ferraro, Michael C and Grohmann, Steph and Gurrin, Lyle C and Hayden, Jill A and Hunter, Kylie E and Hyltse, Natalie and Jung, Lukas and Krishan, Ashma and Laporte, Silvy and Lasserson, Toby J and Laursen, David R T and Lensen, Sarah and Li, Wentao and Li, Tianjing and Liu, Jianping and Locher, Clara and Lu, Zewen and Lundh, Andreas and Marsden, Antonia and Meyerowitz-Katz, Gideon and Mol, Ben W and Munn, Zachary and Naudet, Florian and Nunan, David and O'Connell, Neil E and Olsson, Natasha and Parker, Lisa and Patetsini, Eleftheria and Redman, Barbara and Rhodes, Sarah and Richardson, Rachel and Ringsten, Martin and Rogozińska, Ewelina and Seidler, Anna Lene and Sheldrick, Kyle and Stocking, Katie and Sydenham, Emma and Thomas, Hugh and Tsokani, Sofia and Vinatier, Constant and Vorland, Colby J and Wang, Rui and Al Wattar, Bassel H and Weber, Florencia and Weibel, Stephanie and van Wely, Madelon and Xu, Chang and Bero, Lisa and Kirkham, Jamie J}}, issn = {{0895-4356}}, language = {{eng}}, month = {{05}}, publisher = {{Elsevier}}, series = {{Journal of Clinical Epidemiology}}, title = {{Assessing the feasibility and impact of clinical trial trustworthiness checks via an application to Cochrane Reviews : Stage 2 of the INSPECT-SR project}}, url = {{http://dx.doi.org/10.1016/j.jclinepi.2025.111824}}, doi = {{10.1016/j.jclinepi.2025.111824}}, volume = {{184}}, year = {{2025}}, }