Reading fluent speech from talking faces: typical brain networks and individual differences

Hall, D.A., Fussell, C. and Summerfield, A.Q., 2005. Reading fluent speech from talking faces: typical brain networks and individual differences. Journal of Cognitive Neuroscience, 17 (6), pp. 939-953.

193460_1840 Hall PostPrint.pdf



Listeners are able to extract important linguistic information by viewing the talker’s face – a process known as ‘speechreading’. Previous studies of speechreading presented small, closed sets of simple words, and their results indicate that visual speech processing engages a wide network of brain regions in the temporal, frontal, and parietal lobes that are likely to underlie multiple stages of the receptive language system. The present study further explored this network in a large group of subjects by presenting naturally spoken sentences, which tap the richer complexities of visual speech processing. Four different baselines (blank screen, static face, non-linguistic facial gurning, and auditory speech) enabled us to determine the hierarchy of neural processing involved in speechreading and to test the claim that visual input reliably accesses sound-based representations in the auditory cortex. In contrast to passive viewing of a blank screen, the static-face condition evoked activation bilaterally across the border of the fusiform gyrus and cerebellum, and in the medial superior frontal gyrus and left precentral gyrus (P < 0.05, whole-brain corrected).

Item Type: Journal article
Publication Title: Journal of Cognitive Neuroscience
Creators: Hall, D.A., Fussell, C. and Summerfield, A.Q.
Publisher: MIT Press (Massachusetts Institute of Technology)
Date: 2005
Volume: 17
Number: 6
Rights: © 2005 The MIT Press
Divisions: Schools > School of Social Sciences
Record created by: EPrints Services
Date Added: 09 Oct 2015 10:45
Last Modified: 19 Oct 2015 14:36
