McGurk effect

perceptual phenomenon that occurs when the auditory component of one sound is paired with the visual component of another sound, leading to the perception of a third sound

The McGurk effect shows how hearing and vision work together in speech perception. Named after Harry McGurk (23 February 1936 – 17 April 1998), who discovered it with John MacDonald, it shows that people do not hear speech with their ears alone: other senses help them interpret what they hear. The McGurk effect happens when a person watches a video of someone saying /ga/ that has been dubbed with a sound recording of /ba/. A third sound is then heard: /da/.[1][2][3]

The McGurk effect is robust: it still works even if a person knows about it. This is different from some optical illusions, which stop working once a person sees through them.

Overview

The McGurk effect shows that speech perception does not depend on auditory information alone. Visual information from lip reading is also taken in and combined with the sounds that are heard to produce the final percept. The effect appears when the sound of one syllable is paired with the lip movements of another syllable: the two combine into the perception of a third, different sound.[4]

Explanation

When humans perceive speech, they take in not only auditory information but also visual information, in the form of lip movements, facial expressions, and other bodily cues. Usually these two sources of information agree, so the brain simply combines them into one unified percept. In the McGurk effect, the information coming from the ears and the eyes differs, and the brain tries to make sense of the contradictory stimuli, which results in a fusion of both.[5] In humans, information from the eyes tends to dominate other sensory modalities, including hearing, so when /ba/ is heard and /ga/ is seen, the stimulus perceived is /da/.[6] This fused percept is the brain's best compromise between the two conflicting sets of information.
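
This fusion can be pictured with a toy model. The short Python sketch below is only an illustration, not a model from the studies cited here: it assumes each syllable can be placed on a single scale running from the front of the mouth (/ba/, made at the lips) to the back (/ga/, made at the velum), with /da/ roughly in between, and that the percept is whichever syllable lies closest to the average of the auditory and visual cues. The syllable positions and the equal weighting are invented for illustration.

    # Toy cue-combination sketch (illustrative assumptions only):
    # each syllable gets a made-up "place of articulation" value
    # between 0.0 (lips) and 1.0 (back of the mouth).
    PLACE = {"ba": 0.0, "da": 0.5, "ga": 1.0}

    def fused_percept(heard: str, seen: str, visual_weight: float = 0.5) -> str:
        """Average the auditory and visual cues, then report the
        syllable whose place value is closest to that average."""
        estimate = (1 - visual_weight) * PLACE[heard] + visual_weight * PLACE[seen]
        return min(PLACE, key=lambda s: abs(PLACE[s] - estimate))

    # Hearing /ba/ while seeing /ga/ fuses into the in-between /da/:
    print(fused_percept(heard="ba", seen="ga"))  # prints "da"

When the two cues match, the model simply returns the syllable both senses agree on; only the mismatched pairing produces the fused, intermediate answer.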

Similar work by others

Around the same time that Harry McGurk and John MacDonald discovered what is now known as the McGurk effect, Barbara Dodd found a similar audiovisual speech effect: the visual cue 'hole' paired with the audio cue 'tough' produced the perception of 'towel'.[7][8] These discoveries changed the way scientists and researchers view how the different senses interact in the brain.

Infants

Infants also show signs of something like the McGurk effect. An infant cannot be asked what it hears, since it cannot yet speak, but by measuring variables such as attention to audiovisual stimuli, effects similar to the McGurk effect can be observed. Very soon after birth, sometimes within minutes, infants can imitate adult facial movements, an important first step in audiovisual speech perception. The ability to recognize lip movements and speech sounds follows a few weeks after birth. Evidence of the McGurk effect does not appear until about 4 months,[9][10] and it is much stronger at around 5 months.[11][12] To test the effect, infants are first habituated to a stimulus; when the stimulus is changed, the infant shows an effect similar to the McGurk effect. As infants grow and develop, the McGurk effect becomes more prominent, because visual cues start to override purely auditory information in audiovisual speech perception.

Effect in other languages and cultures

Although the McGurk effect has been studied mainly in English, because of its origins in English-speaking countries, research has spread to other countries and languages. The comparison between English and Japanese has been especially prominent. Research has shown that the McGurk effect is much stronger in English listeners than in Japanese listeners.[13] One strong hypothesis is cultural: Japanese culture is notable for politeness and for avoiding direct eye or face contact during conversation, so Japanese listeners may rely less on visual speech cues.[14]

This phenomenon has also been studied in French Canadian children and adults. Compared with adults, children are less susceptible to the McGurk effect, because their speech perception is dominated by auditory information; this matches their lower scores on lip-reading tasks. The McGurk effect was still present in children in certain contexts, but it was much more variable than in adults.[15]

Broader impact on society

Although the McGurk effect may seem important only to psychological researchers and scientists, the phenomenon extends to everyday audiovisual speech perception. A 2005 study by Wareham and Wright suggests that the McGurk effect can influence how everyday speech is perceived. This matters especially for witness testimony, where a witness's observations and accounts are usually expected to be accurate. Testimony must therefore be interpreted with the understanding that the witness may be unaware of their own misperception.[16]

References

  1. McGurk H. & Lewis M. 1974. Space perception in early infancy: perception within a common auditory space? Science, 186, 649-650.
  2. McGurk H. 1988. Developmental psychology and the vision of speech. Inaugural Professorial Lecture, University of Surrey
  3. McGurk H. & MacDonald J. 1976. Hearing lips and seeing voices. Nature, 264, 746-748.
  4. Nath, A.R.; Beauchamp, M.S. (2012). "A neural basis for interindividual differences in the McGurk effect, a multisensory speech illusion". NeuroImage. 59 (1): 781–7. doi:10.1016/j.neuroimage.2011.07.024. PMC 3196040. PMID 21787869.
  5. O’Shea M. 2005. The brain: a very short introduction. Oxford University Press
  6. Colin C., Radeau M. & Deltenre P. 2011. Top-down and bottom-up modulation of audiovisual integration in speech. European Journal of Cognitive Psychology, 17(4), 541-560
  7. Dodd B. 1977. The role of vision in the perception of speech. Perception, 6, 31-40.
  8. Dodd B. & Campbell R. (eds) 1987. Hearing by eye: The psychology of lip-reading. Hillsdale, New Jersey: Lawrence Erlbaum.
  9. Bristow, D., Dehaene-Lambertz, G., Mattout, J., Soares, C., Gliga, T., Baillet, S. & Mangin, J.F. (2009). Hearing faces: How the infant brain matches the face it sees with the speech it hears. Journal of Cognitive Neuroscience, 21(5), 905-921
  10. Burnham, D. & Dodd, B. (2004). Auditory-Visual Speech Integration by Prelinguistic Infants: Perception of an Emergent Consonant in the McGurk Effect. Developmental Psychobiology, 45(4), 204-220
  11. Rosenblum, L.D. (2010). See what I’m saying: The extraordinary powers of our five senses. New York, NY: W. W. Norton & Company Inc.
  12. Rosenblum, L.D., Schmuckler, M.A. & Johnson, J.A. (1997). The McGurk effect in infants. Perception & Psychophysics, 59(3), 347-357
  13. Hisanaga S. et al 2009. Audiovisual speech perception in Japanese and English: inter-language differences examined by event-related potentials. Retrieved from http://www.isca-speech.org/archive_open/avsp09/papers/av09_038.pdf Archived 2016-03-04 at the Wayback Machine
  14. Sekiyama K. 1997. Cultural and linguistic factors in audiovisual speech processing: The McGurk effect in Chinese subjects. Perception and Psychophysics 59(1), 73-80
  15. "Studies of the McGurk effect: implications for theories of speech perception" (PDF). Archived from the original (PDF) on 2017-08-08. Retrieved 2013-11-27.
  16. Schmid G., Thielmann A. & Ziegler W. 2009. The influence of visual and auditory information on the perception of speech and non-speech oral movements in patients with left hemisphere lesions. Clinical Linguistics and Phonetics, 23(3), 208-221

Other websites

Video examples