
Cross-modal integration of affective facial expression and vocal prosody: an EEG study

Weed, Ethan and Christensen, Peer (2011) The 3rd Conference of the Scandinavian Association for Language and Cognition
Abstract
We have all experienced how a telephone conversation can be more challenging than speaking face to face. Understanding the intended meaning of a speaker's words requires forming an impression of the speaker's current mental state, including her beliefs, intentions, and emotional state (Sperber & Wilson, 1995). Facial expressions are an important source of this information. In this study, we asked at what point emotional information is integrated in the processing stream. We hypothesized that the N400 component, which is sensitive to meaning at a variety of levels (Lau, Phillips, & Poeppel, 2008; Van Berkum, Van Den Brink, Tesink, Kos, & Hagoort, 2008), would be affected by incongruous emotions in face/voice pairs.

To test this, we used EEG to record brain responses to congruous and incongruous face/voice stimuli in an oddball paradigm. Participants viewed faces showing either a happy or a sad expression. As they viewed the faces, participants heard a variety of spoken utterances delivered in either a sad or a happy tone of voice.

We found that incongruent facial expressions affected auditory processing of spoken stimuli at surprisingly early stages of the processing stream. Not only did we observe an N400-like effect in the incongruent condition, suggesting an attempt to integrate the incongruent facial and vocal stimuli; we also found that incongruent auditory stimuli elicited a larger N100 wave.

Our results show that as early as 100 msec after the onset of spoken utterances, the brain has made an initial comparison of the affect expressed by the speaker's facial expression and that expressed by vocal prosody. This suggests that early multimodal brain areas, as well as "higher-level" areas, are involved in computations that may be critical to the interpretation of speaker meaning, and that integration of face/voice affective information begins long before an utterance is completed.
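The reported effects rest on a standard ERP comparison: epochs are averaged per condition, and mean amplitudes are contrasted within the N100 (roughly 80–120 ms) and N400 (roughly 300–500 ms) latency windows. As a minimal illustration of that logic — using synthetic single-channel data, not the authors' recordings or analysis pipeline — the sketch below simulates epochs for a congruent and an incongruent condition and measures the window-wise amplitude difference:

```python
import numpy as np

SFREQ = 500                              # sampling rate in Hz (assumed)
TIMES = np.arange(0, 0.7, 1 / SFREQ)     # 0-700 ms post-stimulus epoch

rng = np.random.default_rng(0)

def simulate_epochs(n100_amp, n400_amp, n_trials=100):
    """Toy single-channel epochs: two negative-going Gaussian deflections
    (an 'N100' near 100 ms, an 'N400' near 400 ms) plus Gaussian noise."""
    n100 = n100_amp * np.exp(-((TIMES - 0.10) ** 2) / (2 * 0.01 ** 2))
    n400 = n400_amp * np.exp(-((TIMES - 0.40) ** 2) / (2 * 0.05 ** 2))
    signal = -(n100 + n400)              # ERP components are negative here
    noise = rng.normal(0, 0.5, (n_trials, TIMES.size))
    return signal + noise

def mean_amplitude(erp, t_start, t_end):
    """Mean amplitude of an averaged ERP within a latency window (seconds)."""
    mask = (TIMES >= t_start) & (TIMES < t_end)
    return erp[mask].mean()

# Incongruent face/voice pairs get larger (more negative) deflections,
# mimicking the direction of the reported N100 and N400 effects.
erp_congruent = simulate_epochs(1.0, 1.0).mean(axis=0)
erp_incongruent = simulate_epochs(2.0, 2.0).mean(axis=0)

n100_effect = (mean_amplitude(erp_incongruent, 0.08, 0.12)
               - mean_amplitude(erp_congruent, 0.08, 0.12))
n400_effect = (mean_amplitude(erp_incongruent, 0.30, 0.50)
               - mean_amplitude(erp_congruent, 0.30, 0.50))

# Negative differences = larger negativity in the incongruent condition
print(f"N100 effect (incongruent - congruent): {n100_effect:.2f}")
print(f"N400 effect (incongruent - congruent): {n400_effect:.2f}")
```

The component shapes, amplitudes, and window boundaries here are illustrative assumptions; a real analysis would average many trials per participant and test the window means statistically across participants.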
author: Weed, Ethan and Christensen, Peer
publishing date: 2011
type: Contribution to conference
publication status: published
subject:
conference name: The 3rd Conference of the Scandinavian Association for Language and Cognition
language: English
LU publication?: no
id: 0a9022db-9e82-48a2-8231-1cfbaba5a1d0
date added to LUP: 2016-09-27 08:44:40
date last changed: 2016-09-27 09:08:09
@misc{0a9022db-9e82-48a2-8231-1cfbaba5a1d0,
  abstract     = {We have all experienced how a telephone conversation can be more challenging than speaking face to face. Understanding the intended meaning of a speaker's words requires forming an impression of the speaker's current mental state, including her beliefs, intentions, and emotional state (Sperber & Wilson, 1995). Facial expressions are an important source of this information. In this study, we asked at what point emotional information is integrated in the processing stream. We hypothesized that the N400 component, which is sensitive to meaning at a variety of levels (Lau, Phillips, & Poeppel, 2008; Van Berkum, Van Den Brink, Tesink, Kos, & Hagoort, 2008), would be affected by incongruous emotions in face/voice pairs. To test this, we used EEG to record brain responses to congruous and incongruous face/voice stimuli in an oddball paradigm. Participants viewed faces showing either a happy or a sad expression. As they viewed the faces, participants heard a variety of spoken utterances delivered in either a sad or a happy tone of voice. We found that incongruent facial expressions affected auditory processing of spoken stimuli at surprisingly early stages of the processing stream. Not only did we observe an N400-like effect in the incongruent condition, suggesting an attempt to integrate the incongruent facial and vocal stimuli; we also found that incongruent auditory stimuli elicited a larger N100 wave. Our results show that as early as 100 msec after the onset of spoken utterances, the brain has made an initial comparison of the affect expressed by the speaker's facial expression and that expressed by vocal prosody. This suggests that early multimodal brain areas, as well as "higher-level" areas, are involved in computations that may be critical to the interpretation of speaker meaning, and that integration of face/voice affective information begins long before an utterance is completed.},
  author       = {Weed, Ethan and Christensen, Peer},
  language     = {eng},
  title        = {Cross-modal integration of affective facial expression and vocal prosody: an EEG study},
  year         = {2011},
}