
Lund University Publications


Automatic Implicit Motive Codings Are at Least as Accurate as Humans’ and 99% Faster

Nilsson, August Håkan; Runge, J. Malte; Ganesan, Adithya V.; Lövenstierne, Carl Viggo N.G.; Soni, Nikita and Kjell, Oscar N.E. (2025) In Journal of Personality and Social Psychology 128(6). p. 1371-1392
Abstract

Implicit motives, nonconscious needs that influence individuals’ behaviors and shape their emotions, have been part of personality research for nearly a century but differ from personality traits. The implicit motive assessment is very resource-intensive, involving expert coding of individuals’ written stories about ambiguous pictures, and has hampered implicit motive research. Using large language models and machine learning techniques, we aimed to create high-quality implicit motive models that are easy for researchers to use. We trained models to code the need for power, achievement, and affiliation (N = 85,028 sentences). The person-level assessments converged strongly with the holdout data, intraclass correlation coefficient, ICC(1,1) = .85, .87, and .89 for achievement, power, and affiliation, respectively. We demonstrated causal validity by reproducing two classical experimental studies that aroused implicit motives. We let three coders recode sentences where our models and the original coders strongly disagreed. We found that the new coders agreed with our models in 85% of the cases (p < .001, ϕ = .69). Using topic and word embedding analyses, we found specific language associated with each motive to have high face validity. We argue that these models can be used in addition to, or instead of, human coders. We provide a free, user-friendly framework in the established R-package text and a tutorial for researchers to apply the models to their data, as these models reduce the coding time by over 99% and require no cognitive effort for coding. We hope this coding automation will facilitate a historical implicit motive research renaissance.
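
The models are distributed through the authors' R package text (https://r-text.org). The sketch below is illustrative only: textPredict() is a function in that package, but the model identifier shown is a placeholder, and the actual model names and arguments are given in the tutorial that accompanies the article.

library(text)   # install.packages("text"); text::textrpp_install() sets up the backend

# Toy picture-story sentences to code (illustrative examples)
stories <- c(
  "She worked all night to finally win the competition.",
  "They laughed together, glad to be reunited."
)

# textPredict() downloads a pre-trained model and returns predicted codings.
# "implicit_motives_achievement" is a PLACEHOLDER identifier; see the
# tutorial accompanying the article for the real model names.
codings <- textPredict(
  model_info = "implicit_motives_achievement",
  texts = stories
)
codings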

author
Nilsson, August Håkan; Runge, J. Malte; Ganesan, Adithya V.; Lövenstierne, Carl Viggo N.G.; Soni, Nikita and Kjell, Oscar N.E.
publishing date
2025
type
Contribution to journal
publication status
published
keywords
implicit motives, large language models, picture story exercise, power achievement affiliation
in
Journal of Personality and Social Psychology
volume
128
issue
6
pages
1371-1392 (22 pages)
publisher
American Psychological Association (APA)
external identifiers
  • scopus:105003438553
  • pmid:40208739
ISSN
0022-3514
DOI
10.1037/pspp0000544
language
English
LU publication?
yes
id
7dd21877-8fc2-4a40-84f9-d38e5a78aa0a
date added to LUP
2025-09-19 11:08:42
date last changed
2025-09-20 03:04:03
@article{7dd21877-8fc2-4a40-84f9-d38e5a78aa0a,
  abstract     = {{Implicit motives, nonconscious needs that influence individuals’ behaviors and shape their emotions, have been part of personality research for nearly a century but differ from personality traits. The implicit motive assessment is very resource-intensive, involving expert coding of individuals’ written stories about ambiguous pictures, and has hampered implicit motive research. Using large language models and machine learning techniques, we aimed to create high-quality implicit motive models that are easy for researchers to use. We trained models to code the need for power, achievement, and affiliation (N = 85,028 sentences). The person-level assessments converged strongly with the holdout data, intraclass correlation coefficient, ICC(1,1) = .85, .87, and .89 for achievement, power, and affiliation, respectively. We demonstrated causal validity by reproducing two classical experimental studies that aroused implicit motives. We let three coders recode sentences where our models and the original coders strongly disagreed. We found that the new coders agreed with our models in 85% of the cases (p < .001, ϕ = .69). Using topic and word embedding analyses, we found specific language associated with each motive to have high face validity. We argue that these models can be used in addition to, or instead of, human coders. We provide a free, user-friendly framework in the established R-package text and a tutorial for researchers to apply the models to their data, as these models reduce the coding time by over 99% and require no cognitive effort for coding. We hope this coding automation will facilitate a historical implicit motive research renaissance.}},
  author       = {{Nilsson, August Håkan and Runge, J. Malte and Ganesan, Adithya V. and Lövenstierne, Carl Viggo N.G. and Soni, Nikita and Kjell, Oscar N.E.}},
  issn         = {{0022-3514}},
  keywords     = {{implicit motives; large language models; picture story exercise; power achievement affiliation}},
  language     = {{eng}},
  number       = {{6}},
  pages        = {{1371--1392}},
  publisher    = {{American Psychological Association (APA)}},
  series       = {{Journal of Personality and Social Psychology}},
  title        = {{Automatic Implicit Motive Codings Are at Least as Accurate as Humans’ and 99% Faster}},
  url          = {{http://dx.doi.org/10.1037/pspp0000544}},
  doi          = {{10.1037/pspp0000544}},
  volume       = {{128}},
  year         = {{2025}},
}