Multisensory integration is a phenomenon that occurs when multiple senses pick up information from the same stimulus or event, ultimately enhancing its physiological significance. A stimulus produces different forms of physical energy (photons, sound waves, etc.) that are picked up by the senses, translated into signals, and then recognized and processed by different areas of the brain. As a result, the brain is better able to make a judgment about the event and to respond to the stimulus faster than would otherwise be possible. One clear advantage of multisensory integration is that the brain can compensate for a sense when it is compromised or unavailable. For example, when trying to navigate a dark room, we resort to touch to identify corners, walls, and anything that could be in our way. In this way, multisensory integration helps us with tasks in our daily lives and, more importantly, helps us survive and thrive in our environment.
Although there are areas of the brain that process information picked up by a single sense, such as the occipital lobe, which processes visual information (light, colour, motion, etc.), there are multiple areas that process information from more than one sense. For example, the job of the superior colliculus, located in the midbrain, is to integrate cues across different sensory modalities: it contains neurons that are responsible for processing multisensory information. This part of the brain has been a successful model for studying multisensory phenomena, especially those that pertain to speech perception.
The McGurk Effect
The McGurk Effect is an example of how multisensory integration helps the brain understand speech. This audio-visual illusion occurs when the movements of a speaker's lips do not correspond to the phonetic sounds actually being produced, resulting in a change in the perceived sound. In the video, a man repeats a single syllable: "Ba, Ba, Ba". As we watch his lips, it is undeniable that the sound is being produced with a bilabial "B", as we see both of his lips pressed together. But as the video changes, he now appears to produce the sound with a labiodental movement, his top teeth pressed against his bottom lip. Although the narrator informs us that the sound being played has not changed, we as viewers now hear a different sound: "Fa, Fa, Fa". We perceive that a different sound is being produced because the visual information has changed. This phenomenon is an example of how multisensory integration affects speech perception.