
Visual and auditory cueing direct visual predictions in immersive embodied locomotion

Kondyli, Vasiliki; Leszczynski, Marcin and Bhatt, Mehul (2024). 18th European Workshop on Imagery and Cognition
Abstract
Everyday tasks such as driving and cycling involve continuous predictions to fulfil the requirements of situation awareness. In these tasks, people commonly use predictions to address visual occlusion events, moments when information is out of sight as a result of human embodiment and natural locomotion. While the visual system is efficient at predicting small-scale visual actions (e.g., saccades), it is unclear how predictive vision generalizes to complex tasks that involve multimodal stimuli (i.e., visual and auditory cues) and large-scale visual actions (i.e., head and body movements). Here, we test whether visual and auditory cues can set predictions for the visual system and whether these predictions facilitate perceptual judgment across head turns and anticipatory gaze during continuous embodied activities. We test this hypothesis in two VR studies in which participants (N=40) naturally drive or cycle in an urban environment while addressing incidents such as overtaking, occluded pedestrians, and turns with limited visibility. Study 1 focuses on visual cueing in occlusion events during driving, and Study 2 examines the effect of auditory cueing on anticipatory attention during overtaking incidents while cycling. We analyse multimodal data including gaze, head movements, steering, and braking. The analysis, currently in progress, suggests that visual cueing leads to an increase in fixations on areas of interest (AOIs) where information is expected to emerge, head movements adjusted towards these AOIs, and faster reaction times in target detection and driving. Similarly, auditory cueing appears crucial for anticipatory attention, with gaze systematically directed towards the source of the cue even when it is out of sight. Taken together, these preliminary outcomes show how visual and auditory cueing can induce small directional biases in gaze and head movements, and how both cues facilitate rapid predictions across the large-scale visual actions that are directly connected to the embodied visuo-locomotive experience.
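
The analysis pipeline itself is not part of this record; the sketch below is a minimal illustration, not the authors' method, of how fixations on areas of interest might be counted from gaze samples using a standard dispersion-threshold (I-DT) detector. All names, thresholds, AOI boundaries, and the data layout are illustrative assumptions.

# Illustrative sketch only: dispersion-threshold (I-DT) fixation detection and
# AOI assignment. Assumes gaze samples as (time_s, x, y) in normalised
# viewport coordinates; all thresholds and AOI boxes are hypothetical.
from dataclasses import dataclass

@dataclass
class Fixation:
    start_s: float  # fixation onset (seconds)
    end_s: float    # fixation offset (seconds)
    x: float        # centroid x
    y: float        # centroid y

def detect_fixations(samples, max_dispersion=0.02, min_duration=0.1):
    """Return fixations from (t, x, y) samples via the I-DT algorithm."""
    fixations, window = [], []
    for t, x, y in samples:
        window.append((t, x, y))
        xs = [p[1] for p in window]
        ys = [p[2] for p in window]
        # Dispersion = horizontal extent + vertical extent of the window.
        if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
            done = window[:-1]
            if done[-1][0] - done[0][0] >= min_duration:
                fixations.append(Fixation(done[0][0], done[-1][0],
                                          sum(p[1] for p in done) / len(done),
                                          sum(p[2] for p in done) / len(done)))
            window = [window[-1]]  # start a new window at the outlying sample
    if window and window[-1][0] - window[0][0] >= min_duration:
        fixations.append(Fixation(window[0][0], window[-1][0],
                                  sum(p[1] for p in window) / len(window),
                                  sum(p[2] for p in window) / len(window)))
    return fixations

def count_fixations_per_aoi(fixations, aois):
    """aois: name -> (x_min, y_min, x_max, y_max); counts fixation centroids."""
    counts = {name: 0 for name in aois}
    for f in fixations:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= f.x <= x1 and y0 <= f.y <= y1:
                counts[name] += 1
    return counts

# Hypothetical usage: 100 Hz gaze dwelling near a cued AOI where occluded
# information is expected to emerge.
gaze = [(i * 0.01, 0.30 + 0.001 * (i % 3), 0.40) for i in range(50)]
aois = {"cued_gap": (0.25, 0.35, 0.45, 0.55), "mirror": (0.80, 0.0, 1.0, 0.20)}
print(count_fixations_per_aoi(detect_fixations(gaze), aois))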
author
Kondyli, Vasiliki; Leszczynski, Marcin and Bhatt, Mehul
publishing date
2024-06
type
Contribution to conference
publication status
published
subject
conference name
18th European Workshop on Imagery and Cognition
conference location
Naples, Italy
conference dates
2024-06-13 - 2024-06-15
language
English
LU publication?
no
id
c46944e2-81b6-4fd7-a577-215f8c45cabd
date added to LUP
2025-06-28 21:00:29
date last changed
2025-07-01 10:34:54
@misc{c46944e2-81b6-4fd7-a577-215f8c45cabd,
  abstract     = {{Everyday tasks such as driving and cycling involve continuous predictions to fulfil the requirements of situation awareness. In these tasks, people commonly use predictions to address visual occlusion events, moments when information is out of sight as a result of human embodiment and natural locomotion. While the visual system is efficient at predicting small-scale visual actions (e.g., saccades), it is unclear how predictive vision generalizes to complex tasks that involve multimodal stimuli (i.e., visual and auditory cues) and large-scale visual actions (i.e., head and body movements). Here, we test whether visual and auditory cues can set predictions for the visual system and whether these predictions facilitate perceptual judgment across head turns and anticipatory gaze during continuous embodied activities. We test this hypothesis in two VR studies in which participants (N=40) naturally drive or cycle in an urban environment while addressing incidents such as overtaking, occluded pedestrians, and turns with limited visibility. Study 1 focuses on visual cueing in occlusion events during driving, and Study 2 examines the effect of auditory cueing on anticipatory attention during overtaking incidents while cycling. We analyse multimodal data including gaze, head movements, steering, and braking. The analysis, currently in progress, suggests that visual cueing leads to an increase in fixations on areas of interest (AOIs) where information is expected to emerge, head movements adjusted towards these AOIs, and faster reaction times in target detection and driving. Similarly, auditory cueing appears crucial for anticipatory attention, with gaze systematically directed towards the source of the cue even when it is out of sight. Taken together, these preliminary outcomes show how visual and auditory cueing can induce small directional biases in gaze and head movements, and how both cues facilitate rapid predictions across the large-scale visual actions that are directly connected to the embodied visuo-locomotive experience.}},
  author       = {{Kondyli, Vasiliki and Leszczynski, Marcin and Bhatt, Mehul}},
  language     = {{eng}},
  month        = {{06}},
  title        = {{Visual and auditory cueing direct visual predictions in immersive embodied locomotion}},
  year         = {{2024}},
}