Neural processing of audiovisual speech signals: prerequisites, trajectories and constraints of a multifaced brain function / Fantoni, Marta. - (2025 Oct 10). [10.13118/fantoni-marta_phd2025-10-10]

Neural processing of audiovisual speech signals: prerequisites, trajectories and constraints of a multifaced brain function

Fantoni Marta
2025

Abstract

Audiovisual speech integration is a core aspect of human communication, enabling individuals to combine auditory and visual cues into a unified percept that facilitates efficient speech processing. This ability develops early in life, shaped by sensory experience during specific developmental windows known as sensitive periods. In a series of studies, this thesis investigates how the brain learns to process and combine audiovisual speech signals. To this end, we measured neural tracking of speech, i.e., how the brain synchronizes with continuous, natural speech, using electroencephalography (EEG). The first study explored how neural tracking of continuous audiovisual speech matures across childhood and adolescence. The results revealed that the brain's ability to track acoustic speech and lip movements follows distinct developmental trajectories. The second study focused on the role of postnatal sensory experience in shaping audiovisual speech processing. Leveraging a natural model of temporary auditory and audiovisual deprivation, children with cochlear implants (a neural prosthesis that can restore auditory function in cases of profound deafness), we found that only those who received auditory input within their first year of life developed efficient neural responses to audiovisual speech. These findings provide strong evidence for a sensitive period during which auditory input is essential for the typical development of audiovisual speech integration. In the third study, we explored the transient effects of auditory and/or visual deprivation in adults, using face masks as an ecological model of investigation. Face masks introduce acoustic degradation and visual occlusion of the mouth, which were found to have distinct effects on the neural processing of speech signals.
Taken together, these studies underscore the complex and interdependent nature of audiovisual speech processing and highlight the critical role of sensory experience in shaping its development. The findings of this PhD thesis have important implications for our understanding of the multisensory brain across both typical and atypical developmental trajectories.
10 Oct 2025
35
Cognitive, Computational and Social Neuroscience
Neuropsicologia e Neuroscienze Cognitive
BOTTARI, DAVIDE
Files in this record:

PhD_Thesis_Fantoni.pdf (embargoed until 31/10/2028)

Type: Doctoral thesis
License: Creative Commons
Size: 2.8 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.11771/40800