This theory was first developed by the psychologists Alvin Liberman and Franklin Cooper in the 1950s, and it remains one of the most debated theories in cognitive psychology. Its central claim is that people perceive spoken words not by identifying the acoustic signal but by identifying the vocal tract gestures with which the words are pronounced; this ability is also claimed to be innate and specific to humans. Three main claims follow from the theory: 1) speech processing is special, 2) perceiving speech is perceiving gestures, and 3) the motor system is recruited for perceiving speech.
The psychologists developed this theory after the unexpected failure of a reading machine intended for blind people. The participants failed to learn, apparently because they could not perceive alphabetic sequences of sounds at practically useful rates: at those rates they could not identify the individual sounds in a sequence, which merged into a blur. Using a spectrograph to explore the acoustic structure of speech, the psychologists discovered that phonetic segments are co-articulated; that is, the vocal tract gestures for successive consonants and vowels overlap in time. Liberman therefore considered speech not an acoustic alphabet or “cipher” but an intricate “code”.
Throughout the 20th and 21st centuries, many scientists and psychologists have debated this theory from different positions. One observation that could support it is that infants mimic the speech they hear, which suggests an association between articulation and its sensory perception. More recently, the discovery of mirror neurons has renewed interest in the theory, even though that concept itself remains contested. Further support comes from studies of aphasia: this medical condition can involve a severe deficit in speech comprehension alongside a well-preserved ability to repeat the sounds heard. On the other hand, an influential line of criticism of Liberman and colleagues holds that speech perception is also shaped by sources of information unrelated to production, such as context: individual words are hard to decipher and understand in isolation but easy when they are heard in sentence context. On this view, speech perception would depend on many factors external to the listener's motor system.