Using a large language model (ChatGPT) to assess risk of bias in randomized controlled trials of medical interventions: protocol for a pilot study of interrater agreement with human reviewers
(2025) In BMC Medical Research Methodology 25(1).
- Abstract
BACKGROUND: Risk of bias (RoB) assessment is an essential part of systematic reviews that requires reading and understanding each eligible trial as well as the RoB tools. RoB assessment is subject to human error and is time-consuming. Machine learning-based tools have been developed to automate RoB assessment using simple models trained on limited corpora. ChatGPT is a conversational agent based on a large language model (LLM) that was trained on an internet-scale corpus and has demonstrated human-like abilities in multiple areas, including healthcare. LLMs might therefore be able to support systematic reviewing tasks such as assessing RoB. We aim to assess interrater agreement in overall (rather than domain-level) RoB assessment between human reviewers and ChatGPT in randomized controlled trials of medical interventions.
METHODS: We will randomly select 100 individually- or cluster-randomized, parallel, two-arm trials of medical interventions from recent Cochrane systematic reviews that assessed RoB using the RoB1 or RoB2 family of tools. We will exclude reviews and trials that were performed under emergency conditions (e.g., COVID-19), as well as public health and welfare interventions. We will use 25 of the trials and their human RoB assessments to engineer a ChatGPT prompt for assessing overall RoB based on trial methods text. We will then obtain ChatGPT RoB assessments for the remaining 75 trials and estimate interrater agreement between the ChatGPT and human assessments using Cohen's κ.
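As a minimal illustration of how such prompt-based assessment could be operationalized, the Python sketch below elicits an overall RoB judgement from a model via the OpenAI chat completions API. The prompt wording, model name, and three-category answer format are illustrative assumptions; the protocol's engineered prompt is not specified here.

```python
# Minimal sketch (not the authors' code) of eliciting an overall RoB
# judgement from an LLM via the OpenAI chat completions API.
# Assumptions: the prompt text, the default model name, and the
# three-category answer format are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT_TEMPLATE = (
    "You are assessing risk of bias in a randomized controlled trial. "
    "Based only on the methods text below, answer with exactly one of: "
    "low, some concerns, high.\n\nMETHODS:\n{methods_text}"
)

def assess_overall_rob(methods_text: str, model: str = "gpt-4") -> str:
    """Return one overall RoB judgement for a trial's methods text."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user",
                   "content": PROMPT_TEMPLATE.format(methods_text=methods_text)}],
        temperature=0,  # reduce run-to-run variation for reproducibility
    )
    return response.choices[0].message.content.strip().lower()
```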
RESULTS: The primary outcome for this study is overall human-ChatGPT interrater agreement. We will report observed agreement with an exact 95% confidence interval, expected agreement under random assessment, Cohen's κ, and a p-value testing the null hypothesis of no difference between observed and chance-expected agreement. Several other analyses are also planned.
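The planned summary statistics can be sketched as follows; this is an illustration under stated assumptions, not the authors' analysis code. The paired labels are fabricated for demonstration, chance-expected agreement is computed from the raters' marginal distributions (the standard Cohen's κ definition; the protocol's "random assessment" may instead mean uniform over categories), and the confidence interval and p-value use exact (Clopper-Pearson/binomial) methods.

```python
# Hedged sketch of the primary analysis: observed agreement with an exact
# 95% CI, chance-expected agreement, Cohen's kappa, and an exact binomial
# p-value. Labels and paired data below are fabricated for illustration.
from collections import Counter
from scipy.stats import binomtest

CATEGORIES = ["low", "some concerns", "high"]  # assumed RoB2-style labels

# Illustrative paired assessments for 75 evaluation trials (fake data).
human   = ["low", "high", "some concerns", "low", "high"] * 15
chatgpt = ["low", "high", "high",          "low", "high"] * 15

n = len(human)
agree = sum(h == c for h, c in zip(human, chatgpt))
p_obs = agree / n

# Exact (Clopper-Pearson) 95% CI for observed agreement.
ci = binomtest(agree, n).proportion_ci(confidence_level=0.95, method="exact")

# Chance-expected agreement p_e from the raters' marginal distributions;
# Cohen's kappa is then (p_o - p_e) / (1 - p_e).
h_marg, c_marg = Counter(human), Counter(chatgpt)
p_exp = sum((h_marg[k] / n) * (c_marg[k] / n) for k in CATEGORIES)
kappa = (p_obs - p_exp) / (1 - p_exp)

# Exact binomial p-value for the null hypothesis that observed agreement
# equals chance-expected agreement.
p_value = binomtest(agree, n, p=p_exp).pvalue

print(f"observed agreement {p_obs:.2f} (95% CI {ci.low:.2f} to {ci.high:.2f})")
print(f"expected agreement {p_exp:.2f}, kappa {kappa:.2f}, p = {p_value:.3g}")
```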
CONCLUSIONS: This study is likely to provide the first evidence on interrater agreement between human RoB assessments and those provided by LLMs and will inform subsequent research in this area.
- author
- Rose, Christopher James; Bidonde, Julia; Ringsten, Martin; Glanville, Julie; Berg, Rigmor C; Cooper, Chris; Muller, Ashley Elizabeth; Bergsund, Hans Bugge; Meneses-Echavez, Jose F; Potrebny, Thomas
- organization
- publishing date
- 2025-07-31
- type
- Contribution to journal
- publication status
- published
- subject
- keywords
- Humans, Bias, Generative Artificial Intelligence, Large Language Models, Machine Learning, Observer Variation, Pilot Projects, Randomized Controlled Trials as Topic/methods, Research Design, Risk Assessment/methods, Systematic Reviews as Topic
- in
- BMC Medical Research Methodology
- volume
- 25
- issue
- 1
- article number
- 182
- publisher
- BioMed Central (BMC)
- external identifiers
- scopus:105012266391
- pmid:40745627
- ISSN
- 1471-2288
- DOI
- 10.1186/s12874-025-02631-0
- language
- English
- LU publication?
- yes
- additional info
- © 2025. The Author(s).
- id
- 6e17994a-029b-4403-8281-203522e26e25
- date added to LUP
- 2025-08-11 13:03:11
- date last changed
- 2025-08-12 04:07:22
@article{6e17994a-029b-4403-8281-203522e26e25,
  abstract  = {{<p>BACKGROUND: Risk of bias (RoB) assessment is an essential part of systematic reviews that requires reading and understanding each eligible trial as well as the RoB tools. RoB assessment is subject to human error and is time-consuming. Machine learning-based tools have been developed to automate RoB assessment using simple models trained on limited corpora. ChatGPT is a conversational agent based on a large language model (LLM) that was trained on an internet-scale corpus and has demonstrated human-like abilities in multiple areas, including healthcare. LLMs might therefore be able to support systematic reviewing tasks such as assessing RoB. We aim to assess interrater agreement in overall (rather than domain-level) RoB assessment between human reviewers and ChatGPT in randomized controlled trials of medical interventions.</p><p>METHODS: We will randomly select 100 individually- or cluster-randomized, parallel, two-arm trials of medical interventions from recent Cochrane systematic reviews that assessed RoB using the RoB1 or RoB2 family of tools. We will exclude reviews and trials that were performed under emergency conditions (e.g., COVID-19), as well as public health and welfare interventions. We will use 25 of the trials and their human RoB assessments to engineer a ChatGPT prompt for assessing overall RoB based on trial methods text. We will then obtain ChatGPT RoB assessments for the remaining 75 trials and estimate interrater agreement between the ChatGPT and human assessments using Cohen's κ.</p><p>RESULTS: The primary outcome for this study is overall human-ChatGPT interrater agreement. We will report observed agreement with an exact 95% confidence interval, expected agreement under random assessment, Cohen's κ, and a p-value testing the null hypothesis of no difference between observed and chance-expected agreement. Several other analyses are also planned.</p><p>CONCLUSIONS: This study is likely to provide the first evidence on interrater agreement between human RoB assessments and those provided by LLMs and will inform subsequent research in this area.</p>}},
  author    = {{Rose, Christopher James and Bidonde, Julia and Ringsten, Martin and Glanville, Julie and Berg, Rigmor C and Cooper, Chris and Muller, Ashley Elizabeth and Bergsund, Hans Bugge and Meneses-Echavez, Jose F and Potrebny, Thomas}},
  issn      = {{1471-2288}},
  keywords  = {{Humans; Bias; Generative Artificial Intelligence; Large Language Models; Machine Learning; Observer Variation; Pilot Projects; Randomized Controlled Trials as Topic/methods; Research Design; Risk Assessment/methods; Systematic Reviews as Topic}},
  language  = {{eng}},
  month     = {{07}},
  number    = {{1}},
  publisher = {{BioMed Central (BMC)}},
  series    = {{BMC Medical Research Methodology}},
  title     = {{Using a large language model (ChatGPT) to assess risk of bias in randomized controlled trials of medical interventions: protocol for a pilot study of interrater agreement with human reviewers}},
  url       = {{http://dx.doi.org/10.1186/s12874-025-02631-0}},
  doi       = {{10.1186/s12874-025-02631-0}},
  volume    = {{25}},
  year      = {{2025}},
}