Monday, December 31, 2012

Can't see how to say it right? (Self-reflective, visual-soma-kinaesthetic correction of mispronunciation)

So you try to demonstrate with your face and mouth how a learner should be pronouncing a vowel, for example--and it simply does not work. In fact, the mispronunciation may just get worse. New research by Cook of City University London, Johnston of University College London, and Heyes of the University of Oxford (summarized by Science Daily) may suggest why: visual feedback of the difference between one's facial gesture and that of a model can be effective in promoting accommodation; simple proprioceptive feedback (i.e., trying to connect up the correct model with the movements of the muscles in your face, without seeing what you are doing simultaneously) generally does not work very well. Amen, eh.

I have had students whose brains are wired so that they can make that translation easily, but they are the exception. The solution? Sometimes a mirror works "mirror-cles;" some new software systems (noted in earlier blogs) actually do come up with a computer simulation that attempts to show the learner what is going wrong inside the mouth and what should be happening instead--with apparently very modest, but expensive, results.

The EHIEP approach is to anchor, early on, the positioning and movement of the jaw and tongue to pedagogical movement patterns of the arms and hands. From that perspective, it is relatively easy, at least on vowels, stress and intonation (and some consonants), to provide the learner with visual, auditory and proprioceptive feedback simultaneously, showing both the appropriate model and how the learner's version deviates from it. (In fact, in some correction routines, it is better to anchor the incorrect articulation first, before going to the "correct" one.) In effect, "(Only if) Monkey see (him or her mis-speak), (can) Monkey do (anything about it!)"


Saturday, December 29, 2012

Spirituality the key to pronunciation teaching?

Absolutely . . . were you a practitioner of Traditional Chinese Medicine (TCM) doing English for medical purposes (EMP) pronunciation classes on the side, according to a study by Shi of Beijing Normal University and Zhang of Southwest Minzu University, as summarized by Science Daily. The holistic, mind-body-spirit approach of TCM is said to account for its effectiveness, so unfathomable to much of Western medicine.

In the West, in language teaching, we get the mind-body idea, at least in theory, but the whole notion that we might have to throw in a little spirituality as well does not sit well with most "post-modern, post-methodology" thinking. There is, however, currently a strong resurgence of interest in spirituality in higher education, which will inevitably translate into and influence language teaching as well.

Of course, one of the earlier "affective" methods, Counseling-Learning/Community Language Learning, was created by a Catholic priest, Charles Curran, with a very much Christian-centered spiritual growth model at its core. Likewise, for those of us who teach at faith-based institutions, something analogous to the holistic TCM perspective on spirituality is pretty much business as usual--or at least should be! What bringing spirituality into the mix does, in part, is to make the Cartesian mind-body distinction or separation even more irrelevant to effective instruction and learning, transcending both and requiring that the learner be the primary focus, not his or her language. Talk about embodiment (or incarnation)! Try it sometime . . . at least in spirit!

Friday, December 28, 2012

Pronunciation feedback: the quicker, the better?

The research on corrective feedback in language learning is extensive. The conclusions are a mixed bag, at best. I do not recall seeing a study in the literature, however, that measured the effect of anticipated feedback, whether immediate or postponed. I had missed this 2010 study by Kettle and Häubl of the University of Alberta (summarized by Science Daily), which looked at the impact on test scores when subjects thought that they'd be given immediate, rather than delayed, feedback. Those in the first group estimated their performance to be somewhat lower but did better than the latter group, which tended to overestimate their final score.

Pronunciation feedback in general tends to be more postponed, often in the form of notes to the student or critique of audio recordings. In class, real-time responses to mispronunciations are less in fashion today than during earlier periods, when oral accuracy was strongly promoted and learners were often pressed to speak more in class and be corrected immediately. In survey after survey, learners say they want more spontaneous correction, yet instructors appear less and less likely to comply.

A solution to that, one that we (and many others) have developed in haptic-integrated work, is to use a set of gesture-based signals (perhaps including a vowel number) to alert the learner effectively to problematic pronunciation without requiring excessive public performance. That way the learner can immediately note the problem and either deal with it "internally" or go back and work on it later, perhaps talking with the instructor or another source privately, if necessary. Just the impact of that anticipatory attitude on motivation, according to the research, is worth the cost of tuition. Can hardly wait, eh?

Thursday, December 27, 2012

The pitch for teaching prosody first

There are numerous examples of methods where intonation is taught either first in pronunciation work or shortly thereafter, using techniques such as "reverse accent mimicry," computer-assisted verbal tracking or imitating actors without attending to the meanings of words. Anecdotally, they all seem to work. From a research perspective, intonation or pitch change has been employed extensively in exploring neuroplasticity, the ability of the brain to "learn" and adapt. For most learners, mimicking simple pitch contours in English is not that difficult. If you examine student course books, what you find is that they all include pitch contour work, but where it occurs and how much is done seems completely random.

A new study of how songbirds correct their singing, by Sober and Brainard of UCSF (summarized by Science Daily), draws an interesting conclusion: they fix the little mistakes and ignore the big ones. The Bengalese finches provide us with an intriguing clue as to how to organize L2 pronunciation work as well: begin with the easy stuff--not the messy articulatory problems or complex phoneme contrasts or conflicts. The arguments for establishing prosody (intonation, rhythm and stress) first are compelling at one level (theoretically), but from the perspective of measuring tangible progress, it is still difficult at best to demonstrate what has been learned, given the tools we have available today.

Children clearly learn prosody first. (In the EHIEP system intonation is now in module four but I am considering introducing it earlier, in part based on this research.) Practically speaking, doing early prosody work is relatively straightforward and not costly. You can do it for a song, in fact.  

Sunday, December 23, 2012

Sound discrimination training: perceived "phon-haptic" distance

Ask any Japanese EFL student how they managed to perceive and later produce the distinction between [i] and [I] or [u] and [U] in English and they'll probably tell you that it was difficult . . . or impossible. The same goes, of course, for L1/L2 phoneme mismatches for most learners, at least initially. The problem is the "competition" between phonetic or articulatory distance, that is, how physically different it is to produce two sounds, and phonemic categorical distance. If the brain "decides" that two sounds represent the same phoneme, regardless of how different it "feels" to produce them--case closed. At least that is what most research suggests. A 2004 study by Gerrits and Schouten of Utrecht University (linked here at the University of Rochester) suggests that the task used in the discrimination process can significantly impact perception of phonemic categories.

In plain English, what does that mean? Basically this: The method you use to assist learners in hearing or producing a phonemic distinction in their L2 can, itself, affect whether they get it or not. Really? Well, maybe . . . So how do you usually do that? Do a class listening discrimination task of some kind? Give them an audio recording to listen to? Show them line drawings and have them repeat after you? Sit down with the learner and use a Starbucks coffee stirrer to get their articulators realigned?

As described in earlier blogposts, the EHIEP approach is to establish points in the visual field where the hands touch as the sound is articulated, what we term "phon-haptically." Those points, or nodes, are strategically placed so that distinctions such as those above are experienced as being physically distant from each other and, somatically, as having very distinct textures or types of touch (tapping, pushing, scratching, brushing, twisting, etc.). The touch-type is chosen to "imitate" the felt sense of producing the vowel in the vocal tract in some way, if only metaphorically. Does it work? Try it and let us know. Keep in touch.

Saturday, December 22, 2012

The "Mudder" of all pronunciation programs

Have been trying for some time now to figure out which pronunciation systems or books are the big sellers. As you can imagine, those numbers are not easy to get at. The plan is to evaluate some of the top programs for embodiment or "physical presence." Have done some preliminary analysis on a few of the student books from the big publishers, which I will use as blog fodder later. ("Blog fodder" . . . nice term there, too.) The idea is to develop a more elaborated framework for applying principles of haptic anchoring to commercially available speaking, listening and pronunciation books. Will begin reporting on that project in a couple of weeks.

The basic EHIEP system provides the necessary orientation to the sound system and a set of techniques to use in working with it, but the point is that those "tools" are then ready to be applied to texts and vocabulary in content-based instruction--where the real pronunciation change actually "happens." With such integrated pronunciation instruction now the "flavor of the month," perhaps the day of the free-standing "Mudder of all pronunciation programs," with its wonderfully clear-cut "follow the yellow brick road" syllabus, is over.

In part as a consequence, pronunciation methodology has become progressively more complex, nuanced and messy as the theoretical and pedagogical waters have muddied. Particularly for the less experienced instructor, doing pronunciation can appear to be nothing short of a very "tough mudder." But it need not be.

Thursday, December 20, 2012

Situating pronunciation practice with "directed thinking?"

So how do you get learners to regularly practice their pronunciation, either as homework or using self-directed spontaneous strategies? There are many approaches, from pleasure to pain, but the most widely tried are strategies such as "getting them to focus on the L2 identity or think about either why they should do it or what their desired outcome will be." Turns out those approaches may not be the best way to do it.

In a 2007 study by Eyck, Gresky and Lord (summarized by Science Daily) of strategies for enhancing exercise engagement in sedentary college sophomores, it was found that "directed thinking" about what subjects could do to increase the likelihood of their being able to do their conditioning routines--that is, the actions they could take to facilitate that activity, rather than why they should do it (desirable outcomes) or the exercises themselves--produced significantly better results. They had been instructed to first create a list of such beneficial or enabling activities and then, daily, at a regular time, mentally review the list for eight weeks. Exercise persistence and increased levels of conditioning followed.

Perhaps most importantly, the approach of Eyck et al. addresses what are often the most common impediments to practice: scheduling conflicts and manageable "temptations." (May be one reason I have worked with so few "fossilized" accountants over the years!) Having learners plan their week's practice in class is often effective, as is working with the pragmatics of "context management," i.e., how to set up people around you to practice on. From that perspective, there should be no excuse for no practice. 

Good to great pronunciation: the "happiness" model

One of the most challenging aspects of pronunciation work is the "meta-communicative" function of appropriately identifying change and then predicting what is next. I was struck by the analogy between that process and aspects of this 2012 study by Sheldon of the University of Missouri-Columbia (summarized by Science Daily) that suggests that sustaining happiness involves two main factors: " . . . the need to keep having new and positive life-changing experiences and the need to keep appreciating what you already have and not want more too soon." (The validity of the study may, of course, be compromised by the fact that it involved 481 subjects living in the Riverside, California area . . . )

The criteria underlying that definition of "happiness" are wonderfully revealing, culturally "Californian" and near debilitating. Evolving pronunciation may not be correlated with many positive "life-changing" experiences, but the question of instructor and learner awareness of what the process is and how it is going is often crucial, especially at points such as the move from "good to great." (Collins' 2001 book, Good to Great, a business classic, describes that general threshold well.) In other words, it is often not the target that is the problem, but the surreal expectations involved. Western teaching methodology in general too easily relies on motivation to finish the job--or take responsibility for failure.

There was a time, of course, when the bar of native speaker-like pronunciation was set impossibly high--for any number of reasons--but at least it did give one a scale to work with. But now that at least some informed theorists and teachers have accepted the target of "intelligible" speech, it has become easier to "appreciate what you have and not want more . . . "

Until there is considerably more change in societal attitudes and human nature, however, problematic pronunciation may still interfere with the need for positive, life-changing experiences, like going from a good job to a great one--or from English class to any job. You and your students happy with that? If not, what do you expect? More importantly, what do you expect them to expect? 




Tuesday, December 11, 2012

Love of fatigue-inducing drill and perfect pronunciation

There had to be a term for it. From a 2011 study by O'Hara, FRCS, summarized by Science Daily: "functional dysphonia (FD), a voice disorder in which an abnormal voice exists with no vocal pathology." Two of the key contributing factors were excessive perfectionism and fatigue. Apparently the symptoms of FD can be of several types, from change in voice pitch to serious pain. Had any perfectionist students in your classes who (nearly) burned themselves out striving for an unachievable native-speaker model? What that suggests, of course, is that the targeted model or accent is not so much the source of the problem as is the perfectionist attitude of either the learner or the methodology. Some earlier structuralist or audiolingual pronunciation approaches do, in retrospect, seem to fit that profile. The contemporary default response of resorting, instead, to ad hoc "near peer" models (although they may have the edge on almost everything but desired accent, according to Bernat) or to conscious decisions by the learner to stop short of what is considered "acceptable pronunciation" on similar grounds (of fluency or shift in priorities) is probably not the answer either. Talk about functional "dys-pronunciation" . . .

Monday, December 10, 2012

Giving pronunciation a bad name?

What you call it does, of course, make a difference, e.g., EHIEP or HICPR! But how's this for a concluding line to an abstract: "This work demonstrates the potency of processing fluency in the information rich context of impression formation." There is, of course, a plethora of potential reasons that a name or term may appeal or stick quickly, other than just being "easier to pronounce," the focus of the 2012 study by Laham, Koval and Alter in the Journal of Experimental Social Psychology. That effect was evident irrespective of " . . . name length, unusualness, typicality, foreignness, and orthographic regularity." In other words, if subjects (simply) reported that a name was easier to pronounce, for the most part that seemed to be based on ease of articulation, with perhaps a bit of "sound symbolism" thrown in.

The more interesting implication of the study, however, is the claim that ease of articulation translates into processing fluency--and more favorable impressions or ratings for the bearers of the names, whether of a person, place or thing. So how is that for a criterion for vocabulary selection and sequencing? Begin with more positive words that are easier and more pleasurable to pronounce; hold off on the nasty consonant clusters and idiosyncratic intonation contours until later: what can be more easily pronounced will be encoded and recalled . . . better.

At least it suggests that in the process of targeting a specific vowel or consonant, all things being equal, the anchoring and practice should not dwell on words that are overwhelmingly negative or which contain problematic articulation, despite the intrinsic "vividness" and affective "punch" involved. I had a somewhat "cynical" colleague who taught pronunciation almost exclusively in the context of pollution--and who was always puzzled that his students were not more positive about the improvement in pronunciation that (should have) resulted. Based on this study, I suspect that something of the combination of the grim topic/presentation and encountering terms such as "environmental" or "toxic" early on may have helped give his class a bad "name" . . .


Sunday, December 9, 2012

L2 "Speech-ture:" Why anchoring pronunciation change with gesture works

I am often asked why associating a new sound or word with a gesture is an effective and efficient method for changing pronunciation. There was a time when a response of "just common sense" or "30 years of classroom experience" was adequate--no longer. Now the retort is "Well . . . show me your fMRI!" Research just published in PLoS ONE by Straube, Green, Weis and Kircher, entitled "A Supramodal Neural Network for Speech and Gesture Semantics: An fMRI Study," almost does just that. In essence, the study demonstrated that at the highest level ("supramodally," that is, spanning or connecting modalities), gesture and speech production are initiated neurophysiologically in the same neural network--but "after" meaning.

The apparently "obvious" distinction between the meanings inherent in verbal and nonverbal expression is a false or at least very complex one: they both seemingly emanate from the same source. That is consistent with Damasio's notion that the "feeling" (or unconscious intuition or meaning) in some sense comes before the words or cognitive embodiment. More precisely perhaps, in real time, once a meaning has been chosen to be expressed by the brain/mind, appropriate body movement and speech associated with that concept or unconscious response are then activated by the same governing network. It is almost as if we need a new term here, something like: "speech-ture."

That may be one reason why haptic anchoring of L2 sound by L2 learners can work: the sound is associated with a unique "speech-ture," not that of the L1 or of the current interlanguage (transitional) form that may still be less than comprehensible in context. In part for that reason, in haptic-integrated work, to "correct" a mispronunciation, the emphasis or conscious focus is on the pedagogical movement pattern (PMP), not the sound or auditory image produced by the learner at the time. That is perhaps the defining (or most innovative) feature of haptic-integrated clinical pronunciation work. Regular practice of the PMP (and the accompanying oral production) for a week or so, independent of the use of the associated sound in a word or context, should generally establish the "correct" or approximate target sound(s)--with little or no further intervention from the instructor. Not infrequently, in fact, a learner can initially produce the "correct" sound using its PMP but still not be able to hear the difference or change . . . in a manner of "speech-turing!"


Saturday, December 8, 2012

Your pronunciation teaching "going downhill?"


Then some advice from a prominent ski instructor, Robert Forster, may be just what you are looking for: " . . . stretching [is] the single most important thing people can do for body health maintenance . . . connective tissue shortens with time . . . We stretch to maintain good alignment of the bones." Most pronunciation instructors would agree that stretching out the muscles of the mouth makes sense, but what about all the rest of the muscles of the upper body (and even "lower" body) involved in speech production that need to be re-oriented for doing new sounds? Quite a few of the roughly 630 muscles in the body must be engaged, as a matter of fact, especially if you take your haptic integration seriously.

If you are not yet a regular stretcher, just to get you ready for the day, begin with a whole-body, yoga-type routine, like this one from Biosnyc. And from them, to stretch most everything needed for fluent speaking, other than the mouth muscles, just do the Cobra, Cow and Cat and you'll be ready to haptic. For a good model of the desired outcome of a good vocal tract warm-up, watch this one by opera singer Jayme Alilaw. By the time she is done, not only her vocal tract but her psyche is ready as well.

Notice Forster's second point about connective tissue "shortening with time." The articulatory complex of muscles that produces a sound is no exception, even within a native speaker. To improve public speaking performance, for example, virtually all of the responsible muscles have to be re-activated and stretched beyond their normal speaking range of motion--before they can be retrained. Pronunciation work is no different. Warm-ups can go from me doing the relatively laid-back, basic EHIEP warm-up to . . . well . . . Marsha Chan!





Friday, December 7, 2012

Disgusting mispronunciation

If there is one unassailable tenet of contemporary language and pronunciation teaching, it is that risk-taking and the inevitable miscues and errors that occur are very good things. Furthermore, only mistakes interfering with "intelligibility" should be attended to, the others left relatively untouched. What "minor" differences between the L1 and L2 remain are at least not the responsibility of instruction and, to many theorists, are near "illegal" to either point to or even react to. In other words, pronunciation errors are for the most part a strong positive, and learners and society at large should not see or experience them as negative--unless, that is, you are still for some reason interested in actually changing or correcting them. That is one of the implications of research by Sherman and colleagues of the Kennedy School of Government, summarized by Science Daily.

In that study, it was found that subjects who were higher in the personality trait of sensitivity to "disgust" were by nature better able to perceive degrees of difference in objects positioned on the light~dark spectrum. (Light~dark being associated in most cultures with pure and impure.) The effect was not apparent with other personality traits such as sensitivity to fear, etc. In other words, detecting an error or difference requires an appropriate degree of affective or emotional indexing. I think it is safe to at least speculate that the opposite effect "works" as well: encourage love of errors (or suppress negative reaction to them) and learners' ability to attend to them or monitor them erodes correspondingly.

Not to sound like a "purist" here, but could it be that some of the current, renewed interest in pronunciation teaching, especially segmental (vowel and consonant) change, is but an unintended consequence of the profession's often uncritical attitude toward "error-ing"?  Disgusting . . . 

Wednesday, December 5, 2012

Effortless learning of the IPA vowel "matrix" of English?

Could be, according to 2011 research by Watanabe at ATR Laboratories in Kyoto and colleagues at Boston University, as summarized by Science Daily--using fMRI technology in the form of neurofeedback tied to carefully scaffolded visual images. Mirroring what appears to go on in real time, in the experiment it was evident that " . . . pictures gradually build up inside a person's brain, appearing first as lines, edges, shapes, colors and motion in early visual areas. The brain then fills in greater detail to make a red ball appear as a red ball, for example."

This is an intriguing idea, something of a "bellwether" of things to come in the field, using fMRI-based technology joined with multiple-modality features to facilitate acquisition of components of complex behavioral patterns. The application of that approach to articulatory training alone--assembling a sound, in effect, one parameter at a time, just the way it is done by expert practitioners--should be relatively straightforward.

The EHIEP vowel matrix resembles the standard IPA vowel chart, except that it is positioned in mirror image and includes only the vowels of English. In training learners to work within it, we do a strikingly similar build-up to that identified in the study: lines < edges < shapes < motion (which is different for each vowel). Each quadrant is then given a colour that corresponds to something of the phonaesthetic quality of the vowels positioned there. Once the "matrix" is kinaesthetically presented and practiced, it is then gradually anchored haptically as the vowels are presented and practiced using distinct pedagogical movement patterns, each terminating in some form of "Guy or Girl touch" as the sound is articulated.

Out of the box? Not for long, my friends!


Tuesday, December 4, 2012

Easing the pain of pronunciation work . . .

With a little empathy, trust and T.L.C., apparently. According to 2011 research by Michigan State University researcher Sarinopoulos and colleagues, summarized by Science Daily, "The brain scans revealed those who had the patient-centered interview showed less activity in the anterior insula . . . and also self-reported less pain . . . a good first step that puts some scientific weight behind the case for empathizing with patients, getting to know them and building trust."

Several earlier posts have addressed the critical importance of trust in getting learners to (quite literally) step out of their comfort zones in mirroring the pedagogical movement patterns or gestures of kinaesthetic learning, in general, and haptic-integration in particular. Empathy is perhaps the key to achieving and maintaining that working relationship in the classroom. And one of the most important ways that empathy is signaled, of course, is with . . . synchronized body movement and its impact on brain waves. 

A number of studies have also investigated the link between empathy and learning pronunciation, for example, a 1980 study by Guiora, Acton, Erard, and Strickland, which found that a Valium-induced, empathy-like state in native English-speaking undergraduates resulted in significantly enhanced ability to repeat impossibly difficult phrases in Thai. (Trust me on that one!)


Monday, December 3, 2012

Minimal, minimal pair work!

In this month's TESOL Connections is a neat piece by Donna Brinton entitled "Pronunciation: Teaching a segmental contrast." (If you are not a TESOL member you may not be able to access it . . . so take my word for what I am about to say about it!) What caught my eye was this: "Other techniques commonly used are “gadgets” (such as drinking straws or popsicle sticks) so that learners can more accurately feel the position of their tongue or kinesthetic techniques such as asking learners to place their hand palm down underneath their chin and practice the given vowel contrast (such as end vs. and), concentrating on the difference in the position of the jaw (i.e., higher for end and lower for and) . . . "

Gadgets. Kinaesthetic techniques are assigned to the category of "gadgets" by most methodologists, not treated as something that is integral to the process. And as "simple" as Brinton makes it sound, far too often the PROBLEM is FIRST getting the correct articulatory setting and then anchoring it. Depending on the L1 of the learner and a few dozen other variables, imitating and integrating the right contrastive vowel quality settings may not be a big deal. In that case, the 5-step process, set out over the course of a few weeks, is near ideal. For others, greater "interdiction" is required--and that takes either training or outsourcing.

Any thoughts on where to get minimal prePAIRation in the messy "physical" side of the work, if you don't have the time or resources to get yourself trained, to become sufficiently "cEHIEPable" to fix and anchor articulatory problems on the fly?

Sunday, December 2, 2012

A touch of gender in (haptically anchoring) English vowels

Grammatical gender is a prominent feature in many Romance and Germanic languages. In some cases there is a correlation between it and masculine or feminine attributes, but it is as often as not just random. A 2011 study by Slepian and Weisbuch of the University of Denver, Rule of the University of Toronto, and Ambady of Tufts University, summarized by Science Daily, ends in this "touching" conclusion: "We were really surprised . . . that the feeling of handling something hard or soft can influence how you visually perceive a face . . . that knowledge about social categories, such as gender, is like other kinds of knowledge -- it's partly carried in the body."

Ya think? Subjects basically held something tough or "tender" as they were asked to make judgements on the gender of people in pictures, and, not surprisingly, the texture of the object affected their "gender detector," or something to that effect. As noted in earlier posts, in the EHIEP system, each vowel type, as it is articulated, is designed to be accompanied by a distinct, sign-like touch with its own very particular texture. (See also earlier posts on the neurophysiological correlates of textural metaphors.) Turns out we may have unwittingly created masculine and feminine vowel anchors! No wonder they work so well!
  • When marking/anchoring stress in words or phrases, (a) use rough GUY-touch for lax vowels in isolation or before voiceless consonants, (b) use tender/static GIRL-touch for tense vowels in isolation or in secondary stressed position in words or phrases, (c) use gouging/dynamic GUY-touch for diphthongs and tense vowels + off-glide, or (d) use tender/dynamic GIRL-touch for lax vowels in stressed syllables before voiced consonants.
  • When marking/anchoring the prominent syllable in a tone or intonation group, use smooth/gentle/flowing GIRL-touch!
  • When marking/anchoring syllables in groups, use gentle tapping GIRL-touch!
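(For readers who like to see such anchoring rules laid out as a simple lookup table, here is a minimal sketch in Python of the mapping above. The category keys and the function name are my own, invented purely for illustration--they are not EHIEP terminology--and the touch descriptions are simply copied from the list.)

# Illustrative only: a hypothetical encoding of the touch-type rules listed above.
# The category keys are invented for this sketch; the touch descriptions come from the list.
TOUCH_RULES = {
    "lax_vowel_isolated_or_before_voiceless": "rough GUY-touch",
    "tense_vowel_isolated_or_secondary_stress": "tender/static GIRL-touch",
    "diphthong_or_tense_vowel_plus_offglide": "gouging/dynamic GUY-touch",
    "lax_vowel_stressed_before_voiced": "tender/dynamic GIRL-touch",
    "prominent_syllable_in_tone_group": "smooth/gentle/flowing GIRL-touch",
    "syllables_in_groups": "gentle tapping GIRL-touch",
}

def touch_for(category: str) -> str:
    """Return the pedagogical touch type for a (hypothetical) vowel/stress category."""
    return TOUCH_RULES.get(category, "no touch type defined")

# Example: touch_for("diphthong_or_tense_vowel_plus_offglide") -> "gouging/dynamic GUY-touch"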


Saturday, December 1, 2012

The body language of pronunciation teaching: Karaoke Affect

One of the potential "turn-offs" for some instructors and students in buying into the gestural and somatic basis of pronunciation work is . . . how "goofy" it looks (with apologies to Goofy, of course). And some of it does, unquestionably. If you need to get to "goofy," you have to ramp up the use of wilder gesticulation gradually--what we call "Karaoke Affect." As long as you establish the context carefully and set up good conceptual partitions, most students will come along with you . . . to goofy and beyond.

But to one who is not in the typical pronunciation teaching box, or just passing by, who has no clue what the class is about, what do the typical gestural classroom techniques communicate: (a) clapping hands, (b) snapping fingers, (c) stretching rubber bands, (d) humming with a kazoo, (e) thumping on the desk, (f) stamping feet, (g) waving hands in the air to imitate intonation, (h) tracing lines on worksheets with fingers, (i) stepping up and down with sentence stress, (j) popping candy in the mouth on certain vowels, (k) throwing bean bags on stressed words--to say nothing of the dozens of mouth machinations done for teaching specific vowel and consonant articulation?

According to recent research by Aviezer of Hebrew University, Trope of New York University and Todorov of Princeton University, summarized by Science Daily, it is the body that accurately communicates feelings (at least), not the face and mouth. In the study, subjects were much better at determining emotional state when focusing on movement and gesture than when looking from the neck up.

Situating and contextualizing those "bizarre" behaviours and what they communicate requires a coherent system to use them in. As we have seen in research in dozens of blog posts, it can go either way. (The EHIEP "way" is a good start, of course!) So, climb in your Karaoke Affect Box, affect your best Eliza Doolittle, and . . . "Show Me"!

Friday, November 30, 2012

The "music" of pronunciation teaching: Just "duet!"

It would be difficult to find anyone who does not support the use of music in language teaching, for any number of reasons. For the most part the rationales are common sense and intuitive--and backed by generations of validating classroom and extra-classroom experience. But here is a study by Lindenberger and colleagues at the Max Planck Institute (reported by Science Daily) that goes in the other direction, examining the synchronous brain activity that is evident in "making music together," in this case two guitarists playing a duet while hooked up to EEG technology (including the usual bathing-cap-like arrays with dozens of wires attached!)

There are similar studies of "duets" of conversationalists, lovers, mothers and infants, and others, which show coordination or mirroring of minds and brains. Likewise, studies of empathy show analogous "sync-ing" between participants. (In EHIEP work, there is extensive mirroring, complementary background music and use of music in supporting rhythmic practice.) Can you imagine a more effective occasion for anchoring of new or changed pronunciation than when instructor and learner are locked in (neurophysiologically and pedagogically appropriate) synchronized dance from across the room--making music together? That is music to more than the ears--and not a bad place to begin in understanding when instruction enables uptake and when it doesn't. Take note . . . and your mp3 player.

Thursday, November 29, 2012

Let's (not) get (too) physical in pronunciation teaching!

With apologies to Olivia Newton-John, I still get that response occasionally in workshops and in reaction to blogposts. The focus of HICPR is not on developing a "physical" method or approach to pronunciation teaching but rather on ensuring that the body is given an appropriate place in the process, especially with the development of technology and haptic-grounded virtual reality. Those who are not by nature "connected" to their bodies--either they (a) don't listen to them much at all or (b) are overly sensitive to how they feel and look--may not be at ease in the "haptic" lesson or in integrating movement, touch and general body awareness in their work.

Have done a couple of earlier posts related to mindfulness theory, meditation practices and body representation. A fascinating study by Dijkstra and Barelds of Groningen University, entitled "Examining a model of dispositional mindfulness, body comparison, and body satisfaction," suggests something of a different approach to better orienting learners and instructors to haptic engagement: dispositional mindfulness training. The research demonstrated " . . . a positive relation between mindfulness and body satisfaction: as individuals are more mindful, they are more satisfied with their body . . . consistent with the fact that non-judgment, a central component of mindfulness, is also highly relevant to the construct of body image . . . "

The key element there is "dispositional," part of a general, eminently trainable response to internal and external pressures and stressors, characterizing one's disposition or style of responding (varying from extremely reactive to non-reactive, for example). Combine that with mindfulness, a general, relatively nonjudgmental awareness or comprehension of what is going on, and you have what appears to be a near optimal mindset for learning pronunciation for any . . . body. Dispositional (haptic-integrated) mindful pronunciation learning: DHIMPL!

Some of that is embodied in EHIEP today, the felt sense of confident, comfortable (dimpled?), managed pedagogical movement, but it should also be the model underlying language instruction in general. The secret to getting there is your point of departure, Lessac's dictum: Train the body first!

Wednesday, November 28, 2012

Aiming at good pronunciation: on the Q(E)T

Always looking for ways to enhance haptic anchoring, I came across some interesting new research by Wood and Wilson of Exeter University using Quiet Eye Training (QET), a well-established technique for helping one (especially professional athletes under pressure) aim at (or focus attention on) a target. The training assists the shooter in putting distraction out of mind. (Some studies report even more generalized impact on everyday cognitive functioning and sense of control as well.)

This is potentially a good fit with other attention management strategies in the EHIEP approach. Early on in the development of the system we experimented with some eye-tracking techniques similar to those used in OEI but discovered that they were a little too "high octane" for general pronunciation work. (In working with "fossilized" individuals I still use some of those regularly, however.) Since QET does not require instructor presence when the shot is taken, it may be possible to use it in some form. Will figure out how to adapt QET training to better enable learners to anchor what they do on the q.t.--and get back to you.

Monday, November 26, 2012

Physical vs. social domains in pronunciation work

Ever wonder why students may not be able to use a new piece of pronunciation in pair work or controlled conversation or on their way out the door? Forthcoming research (already!) published in NeuroImage by Jack, Dawson, Begany, Leckie, Barry, Ciccia and Snyder, "fMRI reveals reciprocal inhibition between social and physical cognitive domains" (in the brain), suggests part of the answer: "Regardless of presentation modality, we observed clear evidence of reciprocal suppression: social tasks deactivated regions associated with mechanical reasoning and mechanical tasks deactivated regions associated with social reasoning."
The implications of that for the integration of pronunciation work, both in the lesson and in the brain of the learner, are worth an "uninhibited" reexamination. For one, perhaps insight, explanation, meaningful conversations, "lite drills" and metacognitive encouragement are not enough for efficient "uptake" to occur. Likewise, decontextualized "body drills" that focus primarily on the mechanics of articulation are not going to automatically bridge the "domain gap" either--in the classroom or on the street. Optimal learning in both domains must go on either simultaneously or in some kind of intricate dance that achieves both outcomes. Haptic integration is one answer to that, where the "channels" of communication and change are not in quite such direct competition. The only problem is often just overcoming the inhibitions of the "haptically challenged."

Sunday, November 25, 2012

Play it again, HIRREM! (A musical tone approach to balanced pronunciation learning?)

With apologies to Humphrey Bogart, one of the basic "learning" assumptions in most training systems is that some degree of balance between relevant areas of the brain, whether left~right, top~bottom or front~back (or all of those), is optimal. How that is to be achieved is the question, of course. As blogged earlier on several occasions, brain research (e.g., as in neurotherapy) is now beginning to offer alternatives, or at least complements, to cognitive and physical exercises or disciplines: brain frequency "adjustment."

In a new study by Tegeler and colleagues at Wake Forest University (summarized by Science Daily), musical tones were mirrored back to the brains of subjects to achieve a more balanced overall brain frequency profile--which appeared to successfully lessen insomnia, at least for a month or so. Tegeler does note that " . . . the changes observed with HIRREM, could be due to a placebo effect. In addition, because HIRREM therapy involves social interaction and relaxation, there may be other non-specific mechanisms for improvement, in addition to the tonal mirroring."

Now granted, this specific technology may not directly impact a learner's ability to learn new or repaired sounds--or even "HIRREM" better, but it is clearly on the right track. (Nothing to lose sleep over if you can't spring for the 30k to get yourself a " . . . high-resolution, relational, resonance-based, electroencephalic mirroring or, as it's commercially known, Brainwave Optimization™ . . . " setup!) But multiple-modality and balanced "all-brain" engagement is the key to pronunciation change. It's coming. Keep in touch.

Saturday, November 24, 2012

An alternative (hand) approach to (haptic) pronunciation teaching!

Have done a few posts on "exercise persistence" research, trying to figure out how to help learners practice consistently. Among the variables will always be something like "self-control or self-discipline," along with other socially oriented factors. One of the reasons I have found such studies of interest, of course, is the connection to movement and physical exercise in haptic pronunciation work.

In a new review article by Denson, DeWall and Finkel (summarized, of course, by Science Daily!) there is reference to a study by Denson in which he (simply) had subjects use their non-dominant hand (in this case, the left hand) for two weeks for various "normal" functions, as all were right-handers, to see whether that might enhance self-control and reduce aggression. It worked! Denson doesn't say exactly why . . . but we can maybe help him.

In the "brain business," such organizations as Lumosity, Brain Gym and many others use a wide range of "out of the box" but proven, physical, bilateral hand and arm movements to manage thought in many forms, from emotion to brainstorming to creativity. They often report or claim the same general effect.

In EHIEP work, for rhythm, intonation, fluency and (some types of) integration, the left hand moves across the visual field to the right hand. The left hand, in effect, "conducts" intonation, pitch and pace functions during correction and practice--and regulates overall speaking performance. The right hand (on the other hand) serves as the anchor for word, phrase, sentence and discourse focus. Denson's research is fascinating. Clearly some of the effectiveness of the EHIEP system as well may be due (simply) to increased activation and engagement of the left hand and arm. We'll take it, whatever the explanation.

Will see if I can work out a protocol to moderate the sometimes mildly (or wildly) "aggressive" reactions of the "hyper-cognitive" or "hapticaphobic" to haptic techniques--before they walk out of the next workshop--something a bit out of the (fuzzy-haptic) box . . . (See previous post on haptic "fuzziness.")

Friday, November 23, 2012

Do-it-Yourself! haptic-integrated pronunciation teaching


Haptic work is, by definition . . . touching! As explored in several previous posts, there is a wide range of conditions under which haptic anchoring of movement, visual images and sound may or may not be effective in instruction. (According to new research by Patterson and colleagues at the University of Leicester, summarized by Science Daily, there may even be a bias in favor of those of us over the age of 65 in responding to the typical "fuzziness" of haptic cinema!)

One of the most striking discoveries in our work has been the realization that some of the EHIEP pedagogical movement patterns can be taught well face-to-face, but others may be better introduced by a video model, especially vowels, vowel "compaction" and intonation. That video model can be the instructor, him- or herself, or someone else--such as in the EHIEP system of videos and student workbooks that I am developing, of course! Why that should be is complex but reasonably well understood. (See this blogpost by Grant at http://filmanalytical.blogspot.ca/)

In essence, it is emotionally and interpersonally very powerful. In some contexts, because of the personality of either the instructor or the class, video is a better option for perhaps half of the PMPs. One reason for that is the impact of eye contact on mirroring in a classroom setting. Put simply, vivid "moving" visual feedback from students, whether negative or positive, can dramatically undermine an instructor's ability to teach PMPs. Once they are introduced, however, classroom use of a PMP to anchor vowels, stress, rhythm, intonation or pitch/volume/pace seems to be less susceptible to disruption.

Bottom line: It takes training to do pronunciation work of any kind effectively or efficiently. Either you get trained or you have somebody else do it for you, either in your program or through technology. Haptic video and its post-production technology is very promising. I am tempted to use a term like "CAPT Video," Computer-Assisted Pronunciation Teaching with Video, were there not already a near-relevant song by that name . . .

Wednesday, November 21, 2012

Pronunciation & body & media fit

If you have been reading the blog occasionally, you are aware of the basis of the EHIEP model: (a) initial pronunciation teaching and (b) practice outsourced to video, with subsequent (c) integrated use in the classroom, (d) strong haptic engagement (movement and touch) and (e) somatic or body awareness and training. For the latter piece, body monitoring, maybe what we need is something like the "BodyMedia FIT" system. I love the company's come-on line: "Your body talks. We listen." Wish I had the spare change to buy one of those armbands, just for fun.

The research on the effectiveness of the technology, using web-based systems, is interesting. "Body training," in general, is biofeedback of one kind or another. This type of technology could easily be adapted to provide constant feedback on the quality of movement, relaxation, energy expenditure and body resonance. For much less money and hassle--with a modicum of self-discipline and persistence--learners can experience the same kind of integrated experience of speaking and pronunciation change with us. The future, however, is with technology such as this linked to CAPT (see previous post) and haptic cinema. But if you have difficulty consistently managing your "current classroom body image" and its caloric correlates, consider "arming yourself" with such a band.

Monday, November 19, 2012

Disembodied pronunciation: computer-generated, animated images of learners' inappropriate articulation


May start a new series of blogposts focusing on amazing-looking pronunciation techniques that, from a HICPR perspective, are so thoroughly disembodied or "dys-haptic" (generally depending heavily on visual modalities alone, lacking a somatic, physical basis) that their chances of working are probably not all that good, at best--such as this one:
"Improvement of animated articulatory gesture extracted from speech for pronunciation training," by Manosavan, Katsurada, Hayashi, Zhu and Nitta of Toyohashi University, a paper from the 2012 IEEE Convention--available for 31 bucks to nonmembers. (Have not read the full paper, just the abstract. My general policy is to pay for no research paper that costs more than 6 Starbucks Venti Caramel Frappuccinos.) Computer-assisted Pronunciation Training (CAPT) is probably the future of the field, but a system that creates a moving, cartoon-like representation of what a learner is doing wrong and then juxtaposes that with an animated image of how to do it right cannot possibly work effectively or efficiently--except perhaps for those who are CAPT designers and gamers. (What do they need appropriate pronunciation for anyway?)

However, if that video image were to be merged with "haptic cinema" technique and technology (linked is a very "a-peeling" example, in fact!), they may still be onto something.

Sunday, November 18, 2012

Got an itch to teach pronunciation?

This is fun. Several of the pedagogical movement patterns in the EHIEP system involve scratching (or brushing) one hand with the fingernails (or just the fingers) of the other hand as the sound is articulated. Have known for some time that when it is demonstrated by the instructor (on video) and learners are asked to mirror that movement, the pattern catches on very quickly. Now we know why. Research by Holle, Warne, Seth, Critchley and Ward of the Universities of Sussex and Hull (abstract on the PNAS website) even suggests which personality trait might respond more readily to seeing someone else scratch an itch: neuroticism (the tendency to respond disproportionately to negative emotions).

Research on mirror neurons alone demonstrates just how powerful the impact of witnessing movement or gesture by another person can be. In this study the extension to the tactile (touch) modality is important for understanding just how haptic-integrated pronunciation instruction works, especially the potential effectiveness of pronunciation-based haptic anchors (gesture in which the hands touch as a stressed syllable of a word is spoken).

Not sure exactly how neuroticism figures in, but in some of the protocols (sets of training techniques) we do use contrasting sets of positive and negative terms, anchored on opposite sides of the body or visual field, e.g., tough/nice, tricky/easy, puzzling/beautiful, complicated/fascinating. The "negatives" may actually resonate more with some! So don't be too concerned if you get an itch to get "tough" on your potentially neurotic students or colleagues who are critical of our work, who see it as too puzzling, tricky or complicated . . .