ALBA: Adaptive Language-Based Assessments for Mental Health
(2024) Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2024, Volume 1, p. 2466-2478
- Abstract
Mental health issues differ widely among individuals, with varied signs and symptoms. Recently, language-based assessments have shown promise in capturing this diversity, but they require a substantial sample of words per person for accuracy. This work introduces the task of Adaptive Language-Based Assessment (ALBA), which involves adaptively ordering questions while also scoring an individual’s latent psychological trait using limited language responses to previous questions. To this end, we develop adaptive testing methods under two psychometric measurement theories: Classical Test Theory and Item Response Theory. We empirically evaluate ordering and scoring strategies, organizing them into two new methods: a semi-supervised item response theory-based method (ALIRT) and a supervised Actor-Critic model. While both methods improved over non-adaptive baselines, we found ALIRT to be the most accurate and scalable, achieving the highest accuracy with fewer questions (e.g., Pearson r ≈ 0.93 after only 3 questions, compared to typically needing at least 7 questions). In general, adaptive language-based assessments of depression and anxiety were able to utilize a smaller sample of language without compromising validity or incurring large computational costs.
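The paper's own ALIRT method is semi-supervised and not specified in this record; purely as a hedged illustration of the standard Item Response Theory machinery the abstract refers to, the sketch below shows adaptive item selection under a textbook 2PL model, where the next question is the unasked item carrying maximal Fisher information at the current trait estimate. All function names and the toy item parameters are hypothetical, not taken from the paper.

```python
import math

def p_endorse(theta, a, b):
    """2PL IRT model: probability of endorsing an item with
    discrimination a and difficulty b, given latent trait theta."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at trait level theta:
    I(theta) = a^2 * p * (1 - p)."""
    p = p_endorse(theta, a, b)
    return a * a * p * (1.0 - p)

def next_item(theta, items, asked):
    """Adaptive step: among unasked items (list of (a, b) tuples),
    pick the index with maximal information at the current theta."""
    best_idx, best_info = None, -1.0
    for idx, (a, b) in enumerate(items):
        if idx in asked:
            continue
        info = item_information(theta, a, b)
        if info > best_info:
            best_idx, best_info = idx, info
    return best_idx

# Toy item bank (hypothetical parameters): for theta = 0, the
# middle item (b = 0, highest a) is the most informative choice.
items = [(1.0, -1.0), (1.5, 0.0), (0.8, 2.0)]
print(next_item(0.0, items, asked=set()))  # → 1
```

In a full adaptive assessment loop, theta would be re-estimated after each response (e.g., by maximum likelihood over the answered items) before selecting the next question; ALBA additionally scores language responses rather than simple correct/incorrect answers.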
- author
- Varadarajan, Vasudha; Sikström, Sverker; Kjell, Oscar N.E. and Schwartz, H. Andrew
- organization
- publishing date
- 2024
- type
- Chapter in Book/Report/Conference proceeding
- publication status
- published
- subject
- host publication
- Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
- editor
- Duh, Kevin ; Gomez, Helena and Bethard, Steven
- volume
- 1
- pages
- 2466-2478 (13 pages)
- publisher
- Association for Computational Linguistics (ACL)
- conference name
- 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2024
- conference location
- Hybrid, Mexico City, Mexico
- conference dates
- 2024-06-16 - 2024-06-21
- external identifiers
- pmid:40093858
- scopus:85200248617
- ISBN
- 9798891761148
- DOI
- 10.18653/v1/2024.naacl-long.136
- language
- English
- LU publication?
- yes
- id
- d2ec26f2-9f35-4bed-8785-72fdb46248bd
- date added to LUP
- 2024-09-17 11:26:34
- date last changed
- 2025-07-09 15:06:46
@inproceedings{d2ec26f2-9f35-4bed-8785-72fdb46248bd,
  abstract  = {{Mental health issues differ widely among individuals, with varied signs and symptoms. Recently, language-based assessments have shown promise in capturing this diversity, but they require a substantial sample of words per person for accuracy. This work introduces the task of Adaptive Language-Based Assessment (ALBA), which involves adaptively ordering questions while also scoring an individual’s latent psychological trait using limited language responses to previous questions. To this end, we develop adaptive testing methods under two psychometric measurement theories: Classical Test Theory and Item Response Theory. We empirically evaluate ordering and scoring strategies, organizing them into two new methods: a semi-supervised item response theory-based method (ALIRT) and a supervised Actor-Critic model. While both methods improved over non-adaptive baselines, we found ALIRT to be the most accurate and scalable, achieving the highest accuracy with fewer questions (e.g., Pearson r ≈ 0.93 after only 3 questions, compared to typically needing at least 7 questions). In general, adaptive language-based assessments of depression and anxiety were able to utilize a smaller sample of language without compromising validity or incurring large computational costs.}},
  author    = {{Varadarajan, Vasudha and Sikström, Sverker and Kjell, Oscar N.E. and Schwartz, H. Andrew}},
  booktitle = {{Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies}},
  editor    = {{Duh, Kevin and Gomez, Helena and Bethard, Steven}},
  isbn      = {{9798891761148}},
  language  = {{eng}},
  pages     = {{2466--2478}},
  publisher = {{Association for Computational Linguistics (ACL)}},
  title     = {{ALBA: Adaptive Language-Based Assessments for Mental Health}},
  url       = {{http://dx.doi.org/10.18653/v1/2024.naacl-long.136}},
  doi       = {{10.18653/v1/2024.naacl-long.136}},
  volume    = {{1}},
  year      = {{2024}},
}