
LUP Student Papers

LUND UNIVERSITY LIBRARIES

The ALIRT Model: An Adaptive Language-Based Assessment Model for Diagnosing Mental Disorders

Böhme, Rebecca Astrid LU (2024) PSYP01 20241
Department of Psychology
Abstract
In this project, I investigate the combination of Natural Language Processing (NLP) with Item Response Theory (IRT) to advance the assessment and scoring of open-response items. Traditional psychometric assessments are grounded in well-defined quality standards, yet assessments based on natural language have so far failed to meet these standards. To address this gap, I integrate NLP-processed open-response items into an IRT framework, aiming to combine the strengths of natural language processing and item response theory to enhance mental health assessment and establish a foundation for computerized adaptive testing.
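The core idea of feeding NLP-scored open responses into an IRT model can be sketched as follows. This is a minimal illustration, not the thesis's actual pipeline: the keyword-based `score_open_response` is a hypothetical stand-in for a real NLP scorer, and all item parameters are invented for the example.

```python
import math

def irt_2pl(theta, a, b):
    """Two-parameter logistic (2PL) item response function: probability of
    endorsing an item given latent trait theta, discrimination a, difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def score_open_response(text, keywords):
    """Hypothetical stand-in for an NLP scorer: maps an open response to a
    binary symptom endorsement. A real pipeline would use embeddings or a
    fine-tuned classifier; this keyword match is illustrative only."""
    return int(any(k in text.lower() for k in keywords))

# Invented item parameters for the sketch
items = [{"a": 1.2, "b": -0.5}, {"a": 0.8, "b": 0.7}]

# NLP-scored open responses enter the IRT model as ordinary item responses
responses = [
    score_open_response("I feel sad most days", ["sad", "down", "hopeless"]),
    score_open_response("Sleep has been fine lately", ["insomnia", "can't sleep"]),
]

def log_likelihood(theta):
    """Log-likelihood of the observed responses at a given theta."""
    ll = 0.0
    for item, x in zip(items, responses):
        p = irt_2pl(theta, item["a"], item["b"])
        ll += x * math.log(p) + (1 - x) * math.log(1 - p)
    return ll

# Maximum-likelihood estimate of theta on a coarse grid from -4 to 4
grid = [i / 20 - 4.0 for i in range(161)]
theta_hat = max(grid, key=log_likelihood)
```

Once open responses are scored this way, standard IRT machinery (parameter estimation, information functions, adaptive item selection) applies to them exactly as it would to rating-scale items.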
In this study, I address three central research questions: the adequacy of newly developed open-response items in capturing DSM-5 criteria for initial mental health assessments, the accuracy and efficiency of the ALIRT model in diagnosing common mental disorders, and the improvement in validity when open-response items are combined with traditional rating scales. I hypothesize that the ALIRT model will provide accurate and valid initial diagnoses, requiring fewer questions than traditional methods while categorizing mental health disorders more accurately through open-response items. I further hypothesize that it offers greater ecological validity and reduced diagnostic time, and that respondents will prefer open-ended responses over traditional rating scales.
The findings, while limited, offer valuable insights for the future development of this approach. The model comparison indicates that the mixed model is superior within the current modeling approach. However, I discuss several limitations encountered during the study, including the complexities of integrating open responses into an IRT framework. Future work will focus on addressing these limitations, refining the model, and exploring additional applications of this approach in computerized adaptive mental health assessments.
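The computerized adaptive testing direction mentioned above typically selects each next item to maximize Fisher information at the current ability estimate. The sketch below illustrates that maximum-information rule for 2PL items; the item bank and its (discrimination, difficulty) parameters are invented for illustration and are not taken from the thesis.

```python
import math

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability level theta:
    I(theta) = a^2 * p * (1 - p), where p is the response probability."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def select_next_item(theta_hat, item_bank, administered):
    """Pick the unadministered item that is most informative at theta_hat,
    the usual maximum-information rule in computerized adaptive testing."""
    candidates = [i for i in range(len(item_bank)) if i not in administered]
    return max(candidates, key=lambda i: item_information(theta_hat, *item_bank[i]))

# Invented (discrimination, difficulty) pairs for the sketch
bank = [(1.5, -1.0), (1.0, 0.0), (2.0, 0.2), (0.7, 1.5)]

first_pick = select_next_item(0.0, bank, administered=set())
second_pick = select_next_item(0.0, bank, administered={first_pick})
```

In a full adaptive assessment this selection step would alternate with re-estimating theta after each response, stopping once the ability estimate is precise enough.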
Please use this url to cite or link to this publication:
author: Böhme, Rebecca Astrid LU
course: PSYP01 20241
year: 2024
type: H2 - Master's Degree (Two Years)
keywords: mental health, assessment, artificial intelligence, natural language processing, item response theory
language: English
id: 9174287
date added to LUP: 2024-09-11 16:29:02
date last changed: 2024-09-11 16:29:02
@misc{9174287,
  abstract     = {{In this project, I investigate the combination of Natural Language Processing (NLP) with Item Response Theory (IRT) to advance the assessment and scoring of open-response items. Traditional psychometric assessments are grounded in well-defined quality standards, yet assessments based on natural language have so far failed to meet these standards. To address this gap, I integrate NLP-processed open-response items into an IRT framework, aiming to combine the strengths of natural language processing and item response theory to enhance mental health assessment and establish a foundation for computerized adaptive testing.
In this study, I address three central research questions: the adequacy of newly developed open-response items in capturing DSM-5 criteria for initial mental health assessments, the accuracy and efficiency of the ALIRT model in diagnosing common mental disorders, and the improvement in validity when open-response items are combined with traditional rating scales. I hypothesize that the ALIRT model will provide accurate and valid initial diagnoses, requiring fewer questions than traditional methods while categorizing mental health disorders more accurately through open-response items. I further hypothesize that it offers greater ecological validity and reduced diagnostic time, and that respondents will prefer open-ended responses over traditional rating scales.
The findings, while limited, offer valuable insights for the future development of this approach. The model comparison indicates that the mixed model is superior within the current modeling approach. However, I discuss several limitations encountered during the study, including the complexities of integrating open responses into an IRT framework. Future work will focus on addressing these limitations, refining the model, and exploring additional applications of this approach in computerized adaptive mental health assessments.}},
  author       = {{Böhme, Rebecca Astrid}},
  language     = {{eng}},
  note         = {{Student Paper}},
  title        = {{The ALIRT Model: An Adaptive Language-Based Assessment Model for Diagnosing Mental Disorders}},
  year         = {{2024}},
}