
LUP Student Papers

LUND UNIVERSITY LIBRARIES

The Power of Words: ChatGPT’s Assessments of Depression and Anxiety Using Responses to Open-ended Questions

Brunsberg, Sophie LU; Holmlund Vidman, Linnea LU and Jarbo, Ragna LU (2024) PSYK11 20241
Department of Psychology
Abstract (Swedish; translated)
The aim of this study was to examine ChatGPT's ability to estimate levels of depression and anxiety. ChatGPT was instructed via a prompt to estimate a participant's level of depression and anxiety, based on the participants' responses to open-ended questions about their symptoms during the past two weeks. In total, the study included word responses from 876 participants. Estimated scores from ChatGPT were compared to the participants' scores on the PHQ-9 and GAD-7 scales, respectively. Different prompts were tested to explore the impact of prompt design (H1), as well as of added information about the cause of the participants' symptoms (H2). ChatGPT's performance was compared to that of the BERT model (H3). The results showed that ChatGPT successfully assessed levels of depression and anxiety, regardless of prompt design or information about the causes of the symptoms. ChatGPT's performance also proved comparable to, and at times even surpassed, that of BERT. Overall, the results show promising potential for using ChatGPT in the future as an alternative to traditional rating scales. However, further research is required to ensure the reliability of ChatGPT's assessments.
Abstract
The aim of this study was to explore the accuracy of ChatGPT's numeric assessments of depression and anxiety. ChatGPT was prompted to estimate participants' levels of depression and anxiety using the participants' responses to an open-ended question about their symptoms during the past two weeks. In total, the study included responses from 876 participants. Estimated scores were compared to scores measured by the PHQ-9 and GAD-7 scales, respectively. Different prompts were tested to explore the impact of prompt design (H1) and of the addition of three words describing participants' perceived reasons for their symptoms (H2). ChatGPT's performance was also compared to that of the BERT model (H3). The results demonstrated that ChatGPT successfully assessed levels of depression and anxiety, regardless of prompt design or the addition of reasons. ChatGPT's performance was also shown to be comparable to, and at times even to surpass, the performance of BERT. Collectively, the results demonstrate promising prospects for further applications of language-based assessments using ChatGPT as an alternative to traditional rating scales. However, further research is necessary to ensure the reliability of ChatGPT's measures.
author: Brunsberg, Sophie LU; Holmlund Vidman, Linnea LU and Jarbo, Ragna LU
course: PSYK11 20241
year: 2024
type: M2 - Bachelor Degree
keywords: ChatGPT, BERT, Mental health assessments, Depression, Anxiety, hälsobedömning (health assessment), Ångest (anxiety)
language: English
id: 9160959
date added to LUP: 2024-06-17 15:49:25
date last changed: 2024-06-17 15:49:25
@misc{9160959,
  abstract     = {{The aim of this study was to explore the accuracy of ChatGPT's numeric assessments of depression and anxiety. ChatGPT was prompted to estimate participants' levels of depression and anxiety using the participants' responses to an open-ended question about their symptoms during the past two weeks. In total, the study included responses from 876 participants. Estimated scores were compared to scores measured by the PHQ-9 and GAD-7 scales, respectively. Different prompts were tested to explore the impact of prompt design (H1) and of the addition of three words describing participants' perceived reasons for their symptoms (H2). ChatGPT's performance was also compared to that of the BERT model (H3). The results demonstrated that ChatGPT successfully assessed levels of depression and anxiety, regardless of prompt design or the addition of reasons. ChatGPT's performance was also shown to be comparable to, and at times even to surpass, the performance of BERT. Collectively, the results demonstrate promising prospects for further applications of language-based assessments using ChatGPT as an alternative to traditional rating scales. However, further research is necessary to ensure the reliability of ChatGPT's measures.}},
  author       = {{Brunsberg, Sophie and Holmlund Vidman, Linnea and Jarbo, Ragna}},
  language     = {{eng}},
  note         = {{Student Paper}},
  title        = {{The Power of Words: ChatGPT’s Assessments of Depression and Anxiety Using Responses to Open-ended Questions}},
  year         = {{2024}},
}