Matching novel face and voice identity using static and dynamic facial images

Smith, HMJ ORCID: https://orcid.org/0000-0003-2712-5527, Dunn, AK ORCID: https://orcid.org/0000-0003-3226-1734, Baguley, T ORCID: https://orcid.org/0000-0002-0477-2492 and Stacey, PC ORCID: https://orcid.org/0000-0002-6018-8979, 2016. Matching novel face and voice identity using static and dynamic facial images. Attention, Perception, & Psychophysics. ISSN 1943-3921


Abstract

Research investigating whether faces and voices share common source identity information has offered contradictory results. Accurate face–voice matching is consistently above chance when the facial stimuli are dynamic, but not when the facial stimuli are static. We tested whether procedural differences might help to account for the previous inconsistencies. In Experiment 1, participants completed a sequential two-alternative forced choice matching task. They either heard a voice and then saw two faces or saw a face and then heard two voices. Face–voice matching was above chance when the facial stimuli were dynamic and articulating, but not when they were static. In Experiment 2, we tested whether matching was more accurate when faces and voices were presented simultaneously. The participants saw two face–voice combinations, presented one after the other. They had to decide which combination was the same identity. As in Experiment 1, only dynamic face–voice matching was above chance. In Experiment 3, participants heard a voice and then saw two static faces presented simultaneously. With this procedure, static face–voice matching was above chance. The overall results, analyzed using multilevel modeling, showed that voices and dynamic articulating faces, as well as voices and static faces, share concordant source identity information. It seems, therefore, that above-chance static face–voice matching is sensitive to the experimental procedure employed. In addition, the inconsistencies in previous research might depend on the specific stimulus sets used; our multilevel modeling analyses show that some people look and sound more similar than others.

Item Type: Journal article
Publication Title: Attention, Perception, & Psychophysics
Creators: Smith, H.M.J., Dunn, A.K., Baguley, T. and Stacey, P.C.
Publisher: Springer
Date: 2016
ISSN: 1943-3921
Identifiers: DOI 10.3758/s13414-015-1045-8
Divisions: Schools > School of Social Sciences
Record created by: Linda Sullivan
Date Added: 08 Jan 2016 16:47
Last Modified: 09 Jun 2017 13:58
URI: https://irep.ntu.ac.uk/id/eprint/26739
