Compensation for a large gesture-speech asynchrony in instructional videos
(2015) Gesture and Speech in Interaction (GESPIN 4), p. 19-23
- Abstract
- We investigated the pragmatic effects of gesture-speech lag by asking participants to reconstruct formations of geometric shapes based on instructional films in four conditions: sync, video or audio lag (±1,500 ms), audio only. All three video groups rated the task as less difficult compared to the audio-only group and performed better. The scores were slightly lower when sound preceded gestures (video lag), but not when gestures preceded sound (audio lag). Participants thus compensated for delays of 1.5 seconds in either direction, apparently without making a conscious effort. This greatly exceeds the previously reported time window for automatic multimodal integration.
Please use this URL to cite or link to this publication:
https://lup.lub.lu.se/record/8045773
- author
- Anikin, Andrey (LU); Nirme, Jens (LU); Alomari, Sarah; Bonnevier, Joakim and Haake, Magnus (LU)
- publishing date
- 2015
- type
- Chapter in Book/Report/Conference proceeding
- publication status
- published
- keywords
- gesture-speech synchronization, multimodal integration, temporal synchronization, comprehension
- host publication
- Gesture and Speech in Interaction - 4th edition (GESPIN 4)
- editor
- Ferré, Gaëlle and Tutton, Mark
- pages
- 19 - 23
- conference name
- Gesture and Speech in Interaction (GESPIN 4)
- conference location
- Nantes, France
- conference dates
- 2015-09-02 - 2015-09-04
- language
- English
- LU publication?
- yes
- id
- 048e9d68-4124-414b-a924-98fd2a516077 (old id 8045773)
- alternative location
- https://hal.archives-ouvertes.fr/hal-01195646
- https://www.lucs.lu.se/wp-content/uploads/2011/12/anakin_nirme_alomari_bonnevier_haake_proc_gespin2015.pdf
- date added to LUP
- 2016-04-04 14:17:58
- date last changed
- 2019-03-08 03:27:59
@inproceedings{048e9d68-4124-414b-a924-98fd2a516077,
  abstract  = {{We investigated the pragmatic effects of gesture-speech lag by asking participants to reconstruct formations of geometric shapes based on instructional films in four conditions: sync, video or audio lag (±1,500 ms), audio only. All three video groups rated the task as less difficult compared to the audio-only group and performed better. The scores were slightly lower when sound preceded gestures (video lag), but not when gestures preceded sound (audio lag). Participants thus compensated for delays of 1.5 seconds in either direction, apparently without making a conscious effort. This greatly exceeds the previously reported time window for automatic multimodal integration.}},
  author    = {{Anikin, Andrey and Nirme, Jens and Alomari, Sarah and Bonnevier, Joakim and Haake, Magnus}},
  booktitle = {{Gesture and Speech in Interaction - 4th edition (GESPIN 4)}},
  editor    = {{Ferré, Gaëlle and Tutton, Mark}},
  keywords  = {{gesture-speech synchronization; multimodal integration; temporal synchronization; comprehension}},
  language  = {{eng}},
  pages     = {{19--23}},
  title     = {{Compensation for a large gesture-speech asynchrony in instructional videos}},
  url       = {{https://hal.archives-ouvertes.fr/hal-01195646}},
  year      = {{2015}},
}