Many researchers have established that background noise can be harmful to human health. Prolonged exposure to noise has various consequences, such as hearing loss, heart disease or changes in the immune system; it is also well known that noise can affect human psychology, causing loss of concentration, increased stress or aggressive behaviour.
Studies on social behaviour conducted during the 1970s suggested that an increase in background noise can even change the way humans behave in situations where help is needed; researchers studied individuals’ willingness to give aid both during and immediately after noise exposure. An early study conducted in 1975 by the psychologist Charles Korte showed that people in areas with low sonic input are more likely to offer small assistance or grant an interview than people in areas with high sonic input. In 1977, researchers demonstrated experimentally that less helping behaviour occurred in a high-noise condition than in a low-noise condition.
For some of the observed effects a simple ‘desire to escape’ explanation could be given: the individual would not stop to give help in order to avoid prolonged noise exposure; however, this explanation could not account for every result. Another possibility is that subjects help less because they are experiencing a negative affective state caused by the noise.
Here’s the link to the original research document: http://www.psy.cmu.edu/~scohen/noisechap84.pdf
Echo City is a group of musicians who run projects and performances featuring a range of giant musical instruments crafted by the members. It started as a project for building giant instruments and sound sculptures called sonic playgrounds; since 1985 the team has also performed as a band. Some of their first pieces combined conventional instruments with the band’s own devices. Their second album, The Sound of Music, released by the industrial label Some Bizzare, was performed entirely on self-made instruments; the recordings are even more experimental: most of the compositions derived from earlier improvisations, they incorporated field recordings, and the album contained the band’s first use of sampling technology.
More recently the project has also involved junk instruments, such as gas pipes, bins, plates and cans. In the workshop Rhythms, the rhythmic component derived from traditional Burundi drum patterns.
For me the project is very interesting in terms of experimental music. It is built around the concepts of experimentation, physical creation (of instruments) and improvisation, all of which make the band very innovative and distinctive.
In 2017, London’s Science Museum opened a beautiful, startling new department, the WonderLab, a huge seven-section gallery entirely dedicated to kids and designed as a sort of futuristic scientific playground. On 25th January the museum celebrated the opening with an evening event where students of IDA (Interaction Design Arts) at London College of Communication introduced the area by presenting some of their works related to the theme of the gallery. One project that I personally found very funny and innovative was VoicePong, developed by the student Daichi Barnett Yamamoto; the game is based on the 1972 arcade game Pong, a two-dimensional game simulating a table-tennis match. Both paddles, though, were controlled by the voice pitch of the two players, who were each given a microphone; the aim of the game was to try not to lose the ball. It was very funny to see participants screaming into the mics, trying to find the right frequency to keep the ball in play.
Daichi often creates inspiring interactive projects that involve sound or music. He is also a very good friend of mine; therefore, being curious about how the piece was developed, I decided to ask him some questions. The first thing I asked him about was the brief he’d received, and he explained that the project was about creating something playful from which you can learn something. He then explained to me that the idea came from a revision of a project he created in his 2nd year, when he was asked to collaborate with Videogame Design students to create an interactive game for a specific space in the school; being all passionate about music, they used sound to create a kind of tug-of-war game activated by screams. At first, for the Science Museum brief, he wanted to create VoicePong purely for the fun of playing it; he subsequently realised that at the same time the players would discover their physical ability at the game. As expected, the game was a big success on the night, constantly drawing a long queue of people who wanted to beat the record; the game is also very compelling, because it involves collaboration with a second player and creates a kind of challenge with yourself.
I asked him about the technical side of the project, namely how he had physically developed it. He had used the software MAX mainly for the sonic part of the project, while all the visuals were made in Processing (a Java-based environment); he programmed MAX to transform the vocal pitch signal it received into numbers that were then sent to Processing and translated into a visual signal. He added: “Using two programs seemed weird at first, but then I discovered that the softwares are complementary, so what MAX couldn’t do was done by processing.”
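Daichi’s actual patch was built in MAX and Processing, which I can’t show here; purely as an illustration, a pitch-to-paddle mapping of the kind he describes could look like the short Python sketch below. The frequency range, screen height and function name are my own assumptions, not details from the project:

```python
def pitch_to_paddle_y(pitch_hz, low_hz=100.0, high_hz=600.0, screen_height=480):
    """Map a detected voice pitch (in Hz) to a paddle y-position in pixels.

    Pitches at or below low_hz pin the paddle to the bottom of the screen,
    pitches at or above high_hz pin it to the top; values in between are
    interpolated linearly. All parameter values here are invented.
    """
    # Clamp the pitch into the playable range.
    pitch_hz = max(low_hz, min(high_hz, pitch_hz))
    # Normalise to 0..1, then invert so that a higher pitch raises the paddle
    # (pixel y-coordinates grow downwards on screen).
    t = (pitch_hz - low_hz) / (high_hz - low_hz)
    return int((1.0 - t) * screen_height)
```

In the real project this mapping would run once per analysis frame, with MAX doing the pitch detection and Processing drawing the paddle at the resulting coordinate.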
Alvin Lucier is an American composer and sound artist known for his experimental use of electroacoustic devices that were innovative at the time, such as the frequency oscillators connected to magnets in his work Music on a Long Thin Wire (1977), or the biofeedback and reverberation machines in his piece Clocker (1978); he is probably best known for his experimentation with the physical properties of sound, as shown in his most famous piece, I Am Sitting in a Room (1969).
One piece I found particularly interesting is North American Time Capsule, performed in a single version in 1966. The piece was composed at the invitation of Sylvania Applied Research Laboratories, which offered the artist the chance to use one of the first prototypes of the vocoder, a device that can transform and synthesise the human voice. During the creation of the piece he gave precise, extravagant instructions to the choir he directed: they had to sing, play a musical instrument or simply produce any sound that could describe their exact situation to living beings far from Earth’s environment, in either space or time. Lucier, helped by the professional engineer Calvin Howard, used the vocoder to isolate and manipulate the voices of the performance in real time; he recorded 8 different tracks which he then mixed together. The result is impressive: the voices sound as if they were produced by unknown creatures trying to communicate in an unknown language; in the background there is a constant unintelligible mumble, like an alien television left on at low volume. Some of the voices reminded me of a sort of mechanical bee flying close to the ears; others sound like extra-terrestrial radio interference interrupted by robotic sibilance and bursts of white noise; some reminded me of an old squeaking door or a ‘strange’ electric razor.
This composition is considered one of the most interesting experiments in the non-semantic use of the human voice, and also a very powerful early example of speech-processing technology.
The motor theory of speech perception was first developed by the psychologists Alvin Liberman and Franklin Cooper in the 1950s, and today it is still one of the most debated theories in cognitive psychology. Its basic claim is that people perceive spoken words not by identifying the acoustic signal, but by identifying the vocal tract gestures with which the words are pronounced; this ability is also claimed to be innate and specific to humans. Three main claims follow from the theory: 1) speech processing is special, 2) perceiving speech is perceiving gestures, 3) the motor system is recruited for perceiving speech.
The psychologists developed the theory after the unexpected failure of a reading machine intended for blind people; participants failed to learn to use it, apparently because of their inability to perceive alphabetic sound sequences at practically useful rates: at those rates they could not identify the individual sounds in the sequence, which merged into a blur. Using a spectrograph to explore the acoustic structure of speech, the psychologists discovered that phonetic segments are co-articulated: the vocal tract gestures for successive consonants and vowels overlap in time. Liberman therefore considered speech not an acoustic alphabet or “cipher”, but an intricate “code”.
Throughout the 20th and 21st centuries many scientists and psychologists have debated this theory from different positions. One fact that could support it is that infants mimic the speech they hear, which points to an association between articulation and its sensory perception. Recently, the discovery of mirror neurons renewed interest in the theory, even if there are many contrasting views on this concept. Another fact that could support the assumption comes from studies conducted on aphasia; this medical condition can be characterised by a severe deficit in speech comprehension alongside a well-preserved ability to repeat the sounds heard. On the other hand, an interesting theory criticises the work of Liberman and colleagues; this hypothesis affirms that speech perception is affected by sources other than production, such as context: individual words are hard to decipher and understand in isolation, but easy when they are heard in sentence context. In this view, speech perception would depend on many other external factors.
In 1958 the English psychologist Donald Broadbent proposed the existence of a theoretical filter device located in our brain between the sensory register of incoming information and the short-term memory storage.
The psychologist theorised that humans process information with limited capacity and must select which information to process early; because of this limited capacity, a selective filter is needed for information processing. In his experiments he made use of the dichotic listening test, a psychological test used to study selective attention within the auditory system; during the experiment the participants wore headphones through which a different auditory stimulus was presented to each ear at the same time. The participants were told to attend to and remember the information coming into one ear and to neglect the information presented to the other; the test showed the participants’ ability to recall information from the attended channel and their inability to recall the stimuli in the unattended channel. In the second part of the test, one set of three digits was sent into one ear and another set of three into the other; participants were then told to recall all the numbers in whatever order they wanted. The results showed that participants would recall the numbers ear by ear, rather than in any other order: for example, if 782 were presented to one ear and 980 to the other, the recall would be 782980. The function of the filter would thus be to prevent the overloading of the limited-capacity mechanism, which is the short-term memory.
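The ear-by-ear recall pattern from the digit experiment can be illustrated in a few lines of Python. This is only a toy sketch of the reported behaviour, not a cognitive model; the digit values come from the 782/980 example:

```python
def ear_by_ear_recall(left_ear, right_ear):
    """Broadbent's reported pattern: participants recall all digits from
    one ear first, then all digits from the other ear."""
    return "".join(left_ear) + "".join(right_ear)

# Three digit pairs presented simultaneously, one digit to each ear:
left = ["7", "8", "2"]   # digits heard in the left ear
right = ["9", "8", "0"]  # digits heard in the right ear

# Ear-by-ear recall, the order participants actually produced:
recalled = ear_by_ear_recall(left, right)                # "782980"

# Presentation order, by contrast, interleaves the ears pair by pair,
# an order participants did NOT produce:
presented = "".join(a + b for a, b in zip(left, right))  # "798820"
```

The contrast between `recalled` and `presented` is the whole finding: the filter seems to let one channel through at a time rather than switching back and forth with every pair.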
This theory is closely linked to the famous cocktail party effect, which shows how humans are able to focus attention on the auditory stimuli they find most interesting.
Another interesting theory was developed by a graduate student of Broadbent’s, Anne Treisman, who proposed Attenuation Theory. This hypothesis suggests that the theoretical filter attenuates, rather than blocks, the stimuli presented to the unattended channel; if a stimulus passes a certain threshold, it gets through and is perceived by the brain. The threshold in question is determined by a word’s meaning: important words (such as one’s name) have a low threshold, so the participant easily gains awareness of them, while unimportant words (such as ‘chair’) have a higher threshold, preventing the participant from gaining awareness inappropriately.
The term Schizophonia was coined by the Canadian composer R. Murray Schafer to indicate the separation between an original sound and its recording. Before the invention of electroacoustic devices for recording and transmitting sound, every sound was original and indissolubly attached to the source producing it; in the modern world any sound can be recorded and played back in different environments. For Schafer the idea of separating a sound from its original source was an aberration of 20th-century development. The main philosophical debate is whether a sound is still original when it is played from another source.
In terms of authenticity, a sound can’t be considered original if it’s not produced by its original source; even if we play it back on a hi-fi speaker system, the sound will always be an artificial reproduction of an event that created vibrations previously captured by a device. For example, many modern recording chains are digital, meaning that the vibrations emitted by a certain event or object are translated into digital data; those data are then read by a computer that translates them back into audio signals, artificially reproducing the sonic properties of the recorded event. If we consider field recording, we could argue that none of the captured sounds are actually original, but are just recordings of a fraction of a timeline; a sound exists in time, so even if it were made twice in a row, the second occurrence would differ from the first. This doesn’t mean that a sound played by a speaker is fake; it is simply technologically reproduced, which for me is a very interesting concept in itself, considering that we can hear fractions of a past that will never come back, such as the speeches of Nelson Mandela (for example).
Most sound design work also makes good use of the concept of schizophonia; just think about the fact that many raining scenes in movies are actually dubbed with the sound of frying bacon! This demonstrates that a sound can lose its originality when it is recorded, but can acquire a new kind of originality when used in different environments or manipulated for a different purpose. For Pierre Schaeffer, the sound object is the main objective of musique concrète: a recorded sound, independent of its source, that is then fixed or reproduced through a device. For R. Murray Schafer, schizophonia and the sound object are almost antagonists.
‘I think it is very useful for this discussion to compare this situation with that of visual creation, in which the freedom to deal with similar separations of elements of reality is not only evident and widespread but also artistically developed far beyond what it is in music. What would be an equivalent critique of what, for example, Van Gogh did with the landscapes he saw? Schaferians: please allow us Schaefferians the freedom of a painter.’
The Fibonacci sequence is a numerical succession in which every number after the first two is the sum of the two preceding ones: 1, 1, 2, 3, 5, 8, 13, 21, 34, etc. This mathematical pattern is closely related to the concept of sectio aurea or golden ratio (two quantities are in the golden ratio if their ratio is the same as the ratio of their sum to the larger of the two quantities). The golden ratio seems to be a mathematical pattern that rules the symmetry of the natural world: we can find it in the leaves that grow on trees, the geometric formation of snowflakes, the spirals in pinecones and even in the dynamics of black holes and the shape of galaxies. Many artists throughout history have used the golden ratio to craft their pieces; one form of art in which the golden ratio is very present is music.
Many composers have applied this pattern to their compositions with sublime results; one great example is the composition “Apparitions” by György Ligeti, which is divided into sections proportional to the golden ratio. Even the harmonics of frequencies have been said to be organised by the Fibonacci sequence. Another interesting musical fact is the arrangement of the keys on a piano: in a keyboard scale from C to C there are 13 keys, with 8 white keys and 5 black keys arranged into groups of 3 and 2; likewise, an octave spans 13 chromatic notes while the scale itself is composed of just 8, in which the 5th and the 3rd notes create the basic foundation of all chords.
The golden ratio is connected to the golden number (1.6180339887…), an endless, irrational figure whose decimal expansion never repeats periodically. In a musical composition that runs from A to B, the point F (the golden ratio) would mark a change in the work (a bridge, or an arrangement with a different instrument, …). Many musicians have also used the Fibonacci sequence to enhance rhythmic patterns or to mark changes.
One of the most brilliant uses of this mathematical pattern in music is the song Lateralus by the visionary American band Tool; the syllable counts of the lyrics clearly follow a Fibonacci pattern, growing from 1 to 8 and, in the 2nd verse, falling from 13 to 3. The intro of the song also ends at 1:37, which turns out to be an approximation of the golden number converted into minutes and seconds (1.618 minutes = 1 minute + 0.618 of a minute ≈ 1 minute and 37 seconds).
Here are the lyrics of the song’s verses, with their syllable counts following the Fibonacci sequence:
2 white are
3 all I see
5 in my in-fan-cy
8 red and ye-llow that came to be
5 rea-ching out to me
3 let me see
13 as be-low so a-bove and be-yond, I i-magine
8 drawn be-yond the lines of rea-son
5 push the enve-lope
3 watch it bend
Discovered by accident by the psychologist Harry McGurk and his research assistant John MacDonald, the McGurk effect was first described in a paper published in 1976. The effect demonstrates the connection between hearing and vision in speech perception; it is based on the illusion that occurs when the sound of one phoneme is paired with the visual component of another, leading the listener to perceive a third sound. The effect can be produced by filming someone speaking a phoneme and then dubbing it with a recording of a different phoneme; for example, when the syllables /ba-ba/ are spoken over the lip movements of /ga-ga/, we may perceive the syllables as /da-da/. The video on this page shows that visible speech can alter the perception of audible speech whenever the visual stimuli are not matched with the auditory ones: if we keep the same original sound but change the video of the lip movements, the sound will appear different, following the visual perception.
The effect demonstrates that speech perception is not just an auditory process, but one in which the brain integrates information unconsciously. The brain is not always aware of the separate sensory contributions to what it perceives, and therefore cannot always differentiate whether the incoming information is being seen or heard.
This phenomenon is very peculiar, and it gets even more intriguing because it seems to change for people with certain mental disorders: researchers have discovered that people with autism spectrum disorder (ASD), Alzheimer’s disease, schizophrenia, dyslexia or aphasia exhibit a weaker McGurk effect. For people with ASD it has been suggested that the weakened effect is due to deficits in identifying both the auditory and the visual components of speech. People with Alzheimer’s disease often have a reduced corpus callosum, producing a hemisphere disconnection that minimises the influence of the visual stimulus, which would account for the lowered effect. The McGurk effect also seems less pronounced in people with schizophrenia, a phenomenon attributed to the slower development of audio-visual integration, which never reaches its developmental peak; people with schizophrenia often rely on auditory cues more than visual cues in speech perception.
Interestingly, research has also shown that the McGurk effect is stronger when the right side of the speaker’s mouth is visible.
For many artists, noise is a powerful source of inspiration: from the 20th-century Italian pioneer of experimental music Luigi Russolo to the Japanese artist Merzbow, a “founder” of noise music. There is always a controversial response to noise art; apparently the relation between pure noise and music disorients the audience. The most common reaction when someone listens to Aube (a Japanese noise artist), for example, is “This is not music! It’s just random noises!”, before switching to something more melodic.
How have artists found new meaning in noise? And why is it so hard to do?
In most cases noise is defined as a disturbance, and people are aware of its dangers, such as sleep disturbance, hearing loss, blood pressure irregularities, increased stress, etc.; the presence of noise concerns humans, and most of the time noise occurs as an unwanted sound. One truth is that the perception of noise can change depending on the context in which it is heard. ‘If you like your neighbors their music is less noisy. If you dislike or fear them any sound they make is noise, encroaching on you through the walls or over the garden fence.’ (Voegelin, 2010, p. 44)
Being defined as an unwanted or unmusical sound, noise automatically generates a Pygmalion effect that prevents people from understanding it in a different way; this phenomenon, also called the Rosenthal effect, is a psychological circumstance in which expectations enhance attention to one thing while reducing attention to everything else. ‘Wherever we are, what we hear is mostly noise. When we ignore it, it disturbs us. When we listen to it, we find it fascinating.’ (Cage, 1939, p. 3)
One possible way to find another meaning in noise is to separate it from the context it comes from and to understand that the context is expressed in the noise itself. Humans also appear to be more attracted to musical structures (such as pitch and rhythm), so it can be hard to appreciate pure sound, free from any rule or convention.
- Voegelin, S. (2010) Listening to Noise and Silence: Towards a Philosophy of Sound Art. The Continuum International Publishing Group Inc.
- Cage, J. (1939) Silence: Lectures and Writings. London: Marion Boyars.