
Lund University Publications


Gaze-action coupling, gaze-gesture coupling, and exogenous attraction of gaze in dyadic interactions

Hessels, Roy S; Li, Peitong; Balali, Sofia; Teunisse, Martin K; Poppe, Ronald; Niehorster, Diederick C; Nyström, Marcus; Benjamins, Jeroen S; Senju, Atsushi; Salah, Albert A and Hooge, Ignace T C (2024) In Attention, Perception & Psychophysics 86(8). p. 2761-2777
Abstract

In human interactions, gaze may be used to acquire information for goal-directed actions, to acquire information related to the interacting partner's actions, and in the context of multimodal communication. At present, there are no models of gaze behavior in the context of vision that adequately incorporate these three components. In this study, we aimed to uncover and quantify patterns of within-person gaze-action coupling, gaze-gesture and gaze-speech coupling, and coupling between one person's gaze and another person's manual actions, gestures, or speech (or exogenous attraction of gaze) during dyadic collaboration. We showed that in the context of a collaborative Lego Duplo-model copying task, within-person gaze-action coupling is strongest, followed by within-person gaze-gesture coupling, and coupling between gaze and another person's actions. When trying to infer gaze location from one's own manual actions, gestures, or speech or that of the other person, only one's own manual actions were found to lead to better inference compared to a baseline model. The improvement in inferring gaze location was limited, contrary to what might be expected based on previous research. We suggest that inferring gaze location may be most effective for constrained tasks in which different manual actions follow in a quick sequence, while gaze-gesture and gaze-speech coupling may be stronger in unconstrained conversational settings or when the collaboration requires more negotiation. Our findings may serve as an empirical foundation for future theory and model development, and may further be relevant in the context of action/intention prediction for (social) robotics and effective human-robot interaction.

Please use this URL to cite or link to this publication:
author
Hessels, Roy S; Li, Peitong; Balali, Sofia; Teunisse, Martin K; Poppe, Ronald; Niehorster, Diederick C; Nyström, Marcus; Benjamins, Jeroen S; Senju, Atsushi; Salah, Albert A and Hooge, Ignace T C
organization
publishing date
2024-11
type
Contribution to journal
publication status
published
subject
in
Attention, Perception & Psychophysics
volume
86
issue
8
pages
17 pages
publisher
Springer
external identifiers
  • scopus:85209378504
  • pmid:39557740
ISSN
1943-3921
DOI
10.3758/s13414-024-02978-4
language
English
LU publication?
yes
additional info
© 2024. The Author(s).
id
f0518436-6a04-4f3b-8d62-ccfa0016b6df
date added to LUP
2024-11-24 17:41:35
date last changed
2025-07-08 11:16:41
@article{f0518436-6a04-4f3b-8d62-ccfa0016b6df,
  abstract     = {{<p>In human interactions, gaze may be used to acquire information for goal-directed actions, to acquire information related to the interacting partner's actions, and in the context of multimodal communication. At present, there are no models of gaze behavior in the context of vision that adequately incorporate these three components. In this study, we aimed to uncover and quantify patterns of within-person gaze-action coupling, gaze-gesture and gaze-speech coupling, and coupling between one person's gaze and another person's manual actions, gestures, or speech (or exogenous attraction of gaze) during dyadic collaboration. We showed that in the context of a collaborative Lego Duplo-model copying task, within-person gaze-action coupling is strongest, followed by within-person gaze-gesture coupling, and coupling between gaze and another person's actions. When trying to infer gaze location from one's own manual actions, gestures, or speech or that of the other person, only one's own manual actions were found to lead to better inference compared to a baseline model. The improvement in inferring gaze location was limited, contrary to what might be expected based on previous research. We suggest that inferring gaze location may be most effective for constrained tasks in which different manual actions follow in a quick sequence, while gaze-gesture and gaze-speech coupling may be stronger in unconstrained conversational settings or when the collaboration requires more negotiation. Our findings may serve as an empirical foundation for future theory and model development, and may further be relevant in the context of action/intention prediction for (social) robotics and effective human-robot interaction.</p>}},
  author       = {{Hessels, Roy S and Li, Peitong and Balali, Sofia and Teunisse, Martin K and Poppe, Ronald and Niehorster, Diederick C and Nyström, Marcus and Benjamins, Jeroen S and Senju, Atsushi and Salah, Albert A and Hooge, Ignace T C}},
  issn         = {{1943-3921}},
  language     = {{eng}},
  month        = {{11}},
  number       = {{8}},
  pages        = {{2761--2777}},
  publisher    = {{Springer}},
  series       = {{Attention, Perception & Psychophysics}},
  title        = {{Gaze-action coupling, gaze-gesture coupling, and exogenous attraction of gaze in dyadic interactions}},
  url          = {{http://dx.doi.org/10.3758/s13414-024-02978-4}},
  doi          = {{10.3758/s13414-024-02978-4}},
  volume       = {{86}},
  year         = {{2024}},
}