We’re halfway into the 2012 Summer Olympics and the U.S. athletes have certainly done America proud. Take Michael Phelps, honoring the U.S. with his 18th gold medal, Gabby Douglas standing proud and bemedaled on the award stand, and 17-year-old boxer Claressa Shields beaming with golden grace.
But let’s have a look at one skill the U.S. athletes have yet to master…the Cockney accent. In an effort to pay homage to their London hosts, several Olympians displayed their affection with a resolute, yet miserably hopeless, attempt to adopt their hosts’ native accent. If you want to laugh along with Team USA, check out their admirable, albeit unsuccessful, attempts. As much as I’m rooting for soccer player Heather O’Reilly, a gold medal in Cockney Accent certainly isn’t in the making.
In the athletes’ defense, the phrases chosen for the task are hardly “textbook.” They’re a wonderful slice of Cockney rhyming slang: idioms in which a phrase stands in for the word it rhymes with; for example, ‘telephone’ becomes the phrase ‘dog and bone.’ The athletes, then, have to contend with a double whammy: getting their tongues around Cockney vowels and consonants, and their heads around the meaning of the phrases. The following idiom, one that stumped Olympian after Olympian, makes the case: “Would you like some John Cleese with your uncle Fred, or just a little bit of talk and mutter?” actually means, “Would you like some cheese with your bread or just a bit of butter?”
Like accent learners of any language, the 2012 Olympians demonstrate the difficulty of trying to acquire a new accent simply by using a “repeat after me” methodology. It doesn’t work, especially not for adult learners. Given the neural wiring of our brains, adults need specific instructions. We need to be taught where to place our tongue, teeth, jaw, and lips to pronounce sounds with which we may not be familiar. We need to be shown what a sound looks like and feels like…in effect, to “see” and “feel” it. Can it be done? Absolutely!
There are thousands of non-native English speakers who have successfully completed American accent training…and each one of them deserves a medal!
*More examples of Cockney slang.
“Metaphors are much more tenacious than facts.”
~ Paul de Man
Believe it or not, despite our varying degrees of poetic ability, we all try our hand at the poet’s most time-honored literary device: metaphor. Metaphor is a technique used to transfer the qualities of one word to another. It seems complicated, but really it’s not. For example, does the following sound familiar? “She has a heart of gold.” Or, “her eyes were shining stars.” Of course, the woman’s heart isn’t really made of gold; nor are her eyes actual celestial bodies. But you get the idea that she’s a kind person with bright and alluring eyes. “Eyes” are, in fact, one of our favorite ‘metaphorical’ words. Here are a few global perspectives (get the pun?):
“When soldiers have been baptized in the fire of a battle-field, they have all one rank in my eyes.” Napoleon Bonaparte (French)
“The hardest thing to see is what is in front of your eyes.” Johann Wolfgang von Goethe (German)
“When I look into the future, it’s so bright it burns my eyes.” Oprah Winfrey (English)
A surprising amount of the English language is rooted in metaphorical associations. Anger is linked to fire, as in “inflammatory remarks” or “consumed by rage.” Love is linked to war, as in “love is a battlefield” or “she fled from his advances.” Another common contrast is “up” versus “down,” where “up” is “good” and “down” is “bad.” It’s usually good when we “grow up”, “stand up”, “rise up”, “team up”, “show up”, and “1-up”, but not so good when we “stand down”, “throw down”, “show down”, “burn down”, or “fall down”. We throw our hands in the air when profits “go up”, but our faces drop when profits “go down”.
One way English speakers can help smooth out communication in multicultural contexts is by using literal ways of talking rather than metaphorical, or non-literal, ways. For non-native English speakers, phrasal verbs are often the most difficult to understand. A phrasal verb is formed when we take a verb like “make” and add a particle like “up”, creating the phrase “to make up”. The reason phrasal verbs are confusing is that they often have more than one meaning. In this case, “to make up” can mean to fabricate, to re-do, or to reconcile. How about the phrase “to make out”? Can you think of at least three meanings?
Mastering a second language can be a challenge. There’s a lot that goes into it: grammar, vocabulary, reading, writing, and pronunciation. Here’s a tip English speakers can use to make the process easier for others: use as much direct, literal speech as possible. This can take some mindfulness and practice. Americans, including me, love indirect speech. In fact, we use non-literal phrases around the clock, night and day, and 24/7! But if we can get into the habit of using words like “arrive” instead of “show up”, and “give” instead of “hand out”, it’s more likely our message will be understood with ease and confidence. How do we know it’s time to use more poetic license? Listen for when the other person starts using idiomatic expressions and non-literal phrases. When you recognize them… follow their lead!
“Reality is merely an illusion, albeit a very persistent one.”
~ Albert Einstein
As it turns out, speaking a language isn’t just about using our ears and mouth. Our eyes play an integral role during conversation – and I don’t mean in terms of interpreting body language or unspoken messages. I’m talking about pronunciation. In some cases, your eyes are even more important than your ears.
Take, for example, the McGurk Effect. This phenomenon is a perfect example of the role vision plays when processing the sounds of a language. Here’s how it works:
Imagine closing your eyes and hearing a recording of someone saying the sound “da”. Then you open your eyes, hear the recording again, and this time watch a video of someone mouthing the sound “ba”. You know what often happens in this case? When watching the “ba” video while hearing the sound “da”, people say they hear “ba” rather than “da”. It’s an audiovisual illusion! Even when you know what’s going on, it’s hard to make your brain hear “da” while your eyes see “ba.” (This is the stuff psycholinguists live for!)
Here’s another way our eyes can fool us… it’s called the Stroop Effect. Most of us reading this would have no trouble saying the following words aloud: red blue green orange.
Nor would we stumble if each word were printed in ink of its own color. But a good many of us will pause a moment when the words and the ink colors don’t match: try naming the ink colors, not the words, when the word “red” is printed in blue ink!
Why? Because speaking, with correct pronunciation, involves using every major area of brain functionality. Neurons from the regions responsible for each of our five senses–hearing, touching, smelling, tasting, and seeing–need to communicate with one another. This is a whole-brain, and difficult, process. And one, by the way, that may well be worth the effort.
It appears that learning a language may be an exceptionally effective tool for delaying the onset of Alzheimer’s. As Meredith Melnick reported for TIME Healthland:
“The key may be something called cognitive reserve. Learning and speaking two languages requires the brain to work harder, which helps keep it nimble… the idea is to help the brain create and maintain more neural connections. Brains with more cognitive reserve – and therefore more flexibility and executive control – are thought to be better able to compensate for the loss of neurons associated with Alzheimer’s.”
In fact, Ms. Melnick notes that with each additional language learned, adults are likely to delay the onset of significant memory loss even longer. She notes that trilinguals were three times less likely to have cognitive problems than bilinguals; quadrilinguals and other polyglots were five times less likely to develop cognitive problems.
This strikes me as a pretty good reason to learn a foreign language. And just as each of the five senses is critical to the process, so too is each of the five areas critical to creating true fluency: grammar, vocabulary, reading, writing, and… don’t forget, pronunciation!
If you’ve never been to the website “Dr. Goodword”, I strongly recommend taking a look. I stumbled upon Dr. Goodword six years ago when I was looking for ways to help my son prepare for the SAT. Lo and behold, Dr. Goodword was it. Every morning my son received a ‘word of the day’ – some uncommon jewel of the English language. Each entry came replete with the word’s etymology, pronunciation, and examples of how it’s used today.
I continue to receive my daily dose from Dr. Goodword. It’s wonderful. …One entry (July 30) is too good to keep to myself and I just had to write about it! The word was Echolalia, and it’s profoundly important to anyone who’s trying to learn the American accent, or any speech pattern for that matter.
Echolalia is essential to one of the most critical stages of early language acquisition. Echolalia is the action of repeating the sounds and words spoken by our caregivers and, later on, by our teachers. For those of you who are parents, do you remember the days when your toddlers parroted your every syllable? While some of those early attempts were a little off the mark, in time those first words began to sound just like ours. Dr. Goodword, by the way, seems to feel that the “lalia” part of echolalia is probably onomatopoeic…meaning it sounds like what it describes. In this case, “lalia” refers to the la-la-la of speech. Echolalia, then, means to repeat that which is spoken.
Interestingly enough, at about the same time Dr. Goodword hit ‘send’ on his echolalia entry, an article by David Robson ran in New Scientist magazine entitled, Kiki or Bouba: In Search of Language’s Missing Link. Robson suggests that humankind probably invented its first words using an onomatopoeic process called “sound symbolism”. He proposes that our ancestors invented new words by shaping their mouths to mimic the shape of the objects they were trying to name. In support, Robson cited the work of Vilayanur Ramachandran and Edward Hubbard, who ran what’s now called ‘The Kiki/Bouba Experiment’. Here, people were given the two words, ‘kiki’ and ‘bouba’, and were asked to match them to two different objects. One of the objects was spiked, the other curved. Ninety-five percent of the people labeled the spiked object “kiki” and the curved one “bouba”. It’s interesting that our lips are stretched horizontally (like the spikes of an object) when we say “kiki” and rounded (like a curved object) when we say “bouba”. Further supporting Robson’s theory, recent studies at the University of Maryland confirmed that the majority of children learn new words better if the words are sound symbolic.
This is great news for our accent reduction specialists at ARI. We’ve known for quite some time that mimicry plays a key role in learning new pronunciation patterns. What’s exciting is the treasure trove of new data that continues to support ARI’s methodology for teaching and learning the American accent. Core to the Ravin Method® is the idea that visual cues are critical when it comes to learning pronunciation. Our brains are hard-wired to mimic not just sounds, but the shapes that our tongue, teeth, lips, and jaw make when producing each sound of any given language. But beyond methodology, I love the way current research keeps going back to the basics: we all learn language the same way. We all can make every sound in the human family of languages. Whatever accent we bring to the table, humankind follows the basic patterns of communication. And isn’t that what language is all about?
It’s easy to see how language can influence perspective. For example, the Chinese word for ‘tragedy’ conveys not just a sense of disaster, but also the idea of opportunity. In other words, good can come out of bad situations. While many people in the West may agree, this particular view isn’t built into our word’s definition.
In this month’s issue of Scientific American, Lera Boroditsky takes the connection between language and thinking one step further. In her article, How Language Shapes Thought, Boroditsky gives numerous examples of how the words we use affect not only what we think, but how we think. Specifically, word choice determines the way we process information. That’s new… and her examples are nothing short of fascinating!
For example, an experiment was conducted in which people from a variety of language backgrounds were asked to find their way out of an unfamiliar building. Which language speakers did the best? Members of an Aboriginal community in Australia who speak a language called Kuuk Thaayorre. Rather than using spatial terms like ‘left’ and ‘right’, they talk in terms of absolute cardinal directions; “the pen is southwest of the paper” or “Sue is sitting north of John”, for example. Boroditsky notes that these speakers’ ability to keep track of spatial locations is “better than scientists thought humans ever could have.”
As it turns out, language use seems to affect nearly every area of cognition, from spatial recognition to memory to color identification to the ability to learn mathematics. We used to believe that thinking shapes language. But cross-linguistic differences clearly demonstrate that language shapes thinking. What does this mean for the adult learner? How we process and use new information depends on what, and how, we speak.
This has a profound impact on ‘best practices’ for ESL speakers and students of English pronunciation. As we know, a simple ‘listen and repeat’ methodology doesn’t work. And while requiring students to look at visual cues is important, this is only one piece of accent training. Explicit, verbal explanations of what to do with the tongue, teeth, lips, and jaw are what completes the picture. Why? Because as socio-linguists tell us: “there may not be a lot of adult human thinking where language does not play a role.”
Have you ever wondered how languages are made? Who invents all the rules anyway? Are we really ‘hard wired’ for language acquisition or is it something we learn if given the right set of circumstances? Or both?
It’s rare that we get to see the birth of a whole new language…one that develops completely naturally, without any help from role models or teachers. But that’s exactly what happened in Nicaragua in the early 1980s, and it gives us great insight into the ‘nature’ vs. ‘nurture’ question of language acquisition.
Prior to the early ’80s, most deaf children in Nicaragua had little or no contact with other deaf children or adults. Their means of communication were limited to a set of ad hoc gestures that ‘made sense’ to family members. But when the government opened its first school for the deaf in Managua, all that changed. Two hundred children had the opportunity, for the first time, to convey their thoughts, feelings, and ideas as fully as any hearing child in Nicaragua, or around the world. But first they had to create the means to do so…
According to Ann Senghas of the Language Acquisition and Development Research Laboratory in New York City:
“…as (the children) interacted, they began to change the gestures and home signs they were using. Their vocabulary grew quickly over those first few years, just like when a little child learns to talk. Their signs became more systematized, more regular, and less gestural. The structure of signed sentences became much more complicated. By the time this generation became adults, at the end of the 1980s, their signs were rapid and fluent. The language had grown to resemble other languages around the world. It could now express ideas as complex as any other language.”
It joined the family of more than 6,300 human languages and is called Nicaragua Sign Language, or NSL.
Languages share an essential, bottom-line characteristic: they’re governed by a strict set of intricate rules. But who ‘makes up’ these rules? I think the answer comes from the children of Nicaragua. Communities of people who need to interact not only create language, they can do so in less than a decade. Wow!
Sometimes our accent reduction learners are amazed by the number of pronunciation rules that govern the American accent. At the beginning, it may seem overwhelming. But I’m on the side of “we’re hard wired for language acquisition” – all aspects of it. Language learning is part of our universal, human experience. And just as a decade is more like a nanosecond with respect to inventing an entire language, 15 hours of instruction (our standard English pronunciation training program) is the blink of an eye.
It’s not an easy task to learn a new language, but I think we can all be inspired by the creators of NSL. For those who are mastering a second language, rest assured that we’re programmed for success.
Watch the story on YouTube about how Nicaragua Sign Language came into being.