Untangling the mass of sound

by Elyse S. Sussman


Let’s suppose we are taking a stroll through the congested streets of a city centre. Perhaps we don’t notice it, but we are constantly bombarded by sounds: honking horns, chattering and shouting, the trilling of mobile phones, music blaring at full volume from clothes shops. Yet, in spite of all this din, we can still pick out the voice of a friend calling us from the other side of the road. How? Thanks to our brain, which can isolate the sound waves that hold the most interest for us. Elyse S. Sussman, Director of the Cognitive Neurophysiology Laboratory at New York’s Albert Einstein College of Medicine, has studied this complex neural mechanism for many years.

How do we manage to untangle the mass of sound that surrounds us?
In actual fact we listen more with our mind than with our ears. What is at stake is the need to give meaning to what we perceive. Everything starts with the vibrations of the eardrums, which are converted into electrical impulses and sent to the brain. At that point two levels come into play: an automatic system that catalogues these stimuli according to their physical characteristics (frequency, spatial location, intensity and timbre), and an attentive system that selects among the sounds in order to transform them into something meaningful. This is how we can tell whether there are two or three cars in the street, whether one of them needs its brakes fixed, or whether the wind is blowing from far away. No computer can match the results of such a complex mechanism.

Attention seems to be fundamental …

It is attention that enables us to interpret this input, which would otherwise remain a meaningless sequence of sounds. For example, it allows us to resolve perceptual ambiguities arising from unclear signals or sounds masked by loud background noise. Attention even allows us to retrieve from memory sounds we think we never heard. How many times, while we are concentrating on something, has our partner asked us a question? At first we are caught on the hop and unable to respond; a few seconds later (up to a maximum of about 30) we remember what he or she asked us.

But not all sounds are the same. Why is it easier to distinguish two human voices than two musical instruments?

In an orchestra, each instrument contributes to the overall melody and harmony by playing its notes in sequence across a broad range of tonal frequencies. Multiple parallel melodies combine, thanks to the composer, into a harmonious whole. This is why it is hard to tell two instruments apart. Human voices, on the other hand, don’t usually overlap so neatly in harmony or rhythm. I should also add that words and speech have a very special meaning for human beings: this is why we are always so good at picking them out.

We’ve talked about sounds. But how is silence interpreted?
Silence is part of the structure of sound, and at the cortical level it is encoded as a violation of the rhythm of a sequence. It is often overlooked by those who study how sound is processed, but it gives us important information about how the elements of sound are linked: just think of the sound of footsteps or the words in a sentence!

Noise pollution is on the increase. What are the consequences?
Cochlear mechanisms are built to handle constant stimulation. We can close our eyes to rest from visual stimulation, but we can’t “close” our ears. Unfortunately, nature didn’t factor in invasive technologies like MP3 players. The risk for young people who play them at high volume all day long is ending up with impaired hearing.

And in the most serious cases?
People with hearing problems usually have trouble isolating individual sounds from their context, such as one voice among many in a crowded restaurant. Of course, there can be various reasons for this: some damage is physiological and involves peripheral mechanisms such as the middle or inner ear; other cases are genuine cognitive disorders associated with serious conditions such as dyslexia, autism or schizophrenia.

You also study the way in which children perceive sound. Why are young children so good at learning sounds they have never heard before?
Every child is born with the innate ability to learn any language, something easy to see in children whose parents speak different languages. During the first few years of life, however, our neural systems adapt to our interactions with the environment. This is why, when we try to learn another language as adults, we discover that things aren’t so simple: it is as if our brain had been “optimised” for our native language. After a certain age it becomes almost impossible to learn a foreign language without retaining traces of our original accent. Certain sounds also remain particularly stubborn: a native Japanese speaker, for example, tends to mix up the /l/ and /r/ sounds.

Interview by Mauro Scanu
