Research

Speech memories are like the internal movies of our lives, allowing us to replay conversations we had with friends or to anticipate future responses from colleagues. Unlike movies, however, it remains unclear how the brain merges information from the senses and forms new memories during speech encoding. This is the question I am committed to answering in my personal line of research.

I aim to understand how the tight synchrony between visual and auditory information predicts multisensory perception and the formation of rich speech memories, with a particular focus on the role of rhythmic brain activity known as "neural oscillations".

I combine the presentation of short movie clips with a broad range of electrophysiological techniques to establish how rhythmic neuronal activity across brain networks contributes to producing a multisensory speech percept and encoding a new memory trace. Among these techniques, I use electro- and magnetoencephalography (EEG/MEG) with healthy volunteers, as well as intracranial EEG (iEEG) recordings with epilepsy patients. I also develop computational models that integrate my empirical findings to predict neural responses during multisensory speech perception.