Wednesday, March 14, 2018

Teaching English L2 advanced conversation (with hand2hand prosodic and paralinguistic "comeback")
We'll be doing a new workshop, "Pronunciation across the 'spaces' between sentences and speakers," at the 2018 BCTEAL Conference here in Vancouver in May. Here is the summary:

This workshop introduces a set of haptic (movement + touch)-based techniques for working with English discourse-level prosodic and paralinguistic bridges between participants in conversation, including key, volume and pace. Some familiarity with the teaching of L2 prosodics (basically: rhythm, stress, juncture and intonation) is recommended.

The framework is based to some extent on Prosodic Orientation in English Conversation, by Szczepek-Reed, and on new features of v5.0 of the haptic pronunciation teaching system, Essential Haptic-integrated English Pronunciation (EHIEP), available by August, 2018. The innovation is the use of several pedagogical movement patterns (PMPs) that help learners attend to the matches and mismatches of prosodics and paralanguage between participants in conversation that create and maintain coherence and . . . empathy across conversational turns.

For a quick glimpse of just the basic prosodic PMPs, see the demo of the AH-EPS ExIT (Expressiveness) from EHIEP v2.0.

The session is only 45 minutes long, so it will just be an experiential overview or tour of the set of speech-synchronized-gesture-and-touch techniques. The video, along with handouts, will be linked here in late May.

Join us!

Saturday, March 3, 2018

Attention! The "Hocus focus" effect on learning and teaching
"We live in such an age of chatter and distraction. Everything is a challenge for the ears and eyes" (Rebecca Pidgeon)  "The internet is a big distraction." (Ray Bradbury)

There is a great deal of research examining the advantage that children appear to have in language learning, especially pronunciation. There is also a gradually accumulating research base on another continuum, that of young vs. "mature" adult learning in the digital age. An intriguing piece by Nir Eyal, posted at one of my favorite occasional light reads, is entitled "Your ability to focus has probably peaked: here's how to stay sharp."

The piece is based in part on The Distracted Mind: Ancient Brains in a High-Tech World by Gazzaley and Rosen. One of the striking findings of the research reported, other than the fact that your ability to focus intently apparently peaks at age 20, is that there is actually no significant difference in focusing ability between those in their 20s and those in their 70s. What is dramatically different, however, is one's susceptibility to distraction. Just as in the magician's "hocus pocus" use of distraction, in a very real sense it may be our ability to not be distracted that is key, not our ability to focus our attention, however intently, on an object or idea. It is a distinction that does make a difference.

The two processes, focusing and avoiding distraction, derive from different areas of the brain. As we age, or in neurological conditions arising from other causes such as injury or trauma, it may become more and more difficult to keep extraneous information or perception from intruding on our thinking. Our executive functions become less effective. Sound familiar?

In examining the effect of distraction on subjects of all ages as they focused on remembering targeted material, a visual field filled with photos of people or familiar objects was significantly more distracting than closing one's eyes (which was only slightly better, in fact), while a plain, single-color visual field with no pattern was the most enabling condition for the focus task. In other words, clutter trumps focus, especially over time. Older subjects were significantly more distracted in all three conditions, but were still better able to focus in the last, less cluttered visual field.

Some interesting implications for teaching there--and validation of our intuitions as well, of course. Probably the most important is that explicit management not just of the learner's attention, but of sources of distraction, in class and outside as well, may reap substantial benefits. This new research helps to further justify broader interventions and more attention on the part of instructors to a whole range of learning-condition issues. In principle, anything that distracts can be credibly "adjusted", especially where fine distinctions or complex concepts are the "focus" of instruction.

In haptic pronunciation work, where the felt sense of what the body is doing should almost always be a prominent part of the learner's awareness, the assumption has been that one function of that process is to better manage attention and visual distraction. If you know of a study that empirically establishes or examines the effect of gesture on attention during vocal production, please let us know!

The question: Is the choice of paying attention or not a basic "student right"? If it isn't, how can you further enhance your effectiveness by better "stickhandling" all sources of distraction in your work . . . including your desktop(s) and the space around you at this moment?

For a potentially productive distraction this week, take a fresh look at what your class feels like and "looks like" . . . without the usual "Hocus focus!"

Friday, February 23, 2018

How watching curling can make you a better teacher!
Tigger alert: This post contains application of insights from curling and business sales to teaching, certainly nothing to be Pooh-Poohed. 

The piece by Dooley linked above, How watching curling helps you sell better, explores the potential effects of ongoing attention in sales: brushing away obstacles, influencing the course of "the rock." Most importantly, however, it emphasizes the idea of constantly examining and influencing the behavior of your customers (your students).

It sounds at first like that analogy flies in the face of empowering the learner and encouraging learner autonomy, to say nothing of questionable manipulation . . . Not quite. It speaks more to instructor responsibility for doing as much as possible to facilitate the process, especially the whole range of "influencing" behaviors that neuroscience is "rediscovering" for us, many of them less explicit and only marginally out of learner awareness, such as room milieu, pacing, voice characteristics, timing and even . . . homework or engagement with the language outside of class.

Marketers, wedded to the new neuroscience (or pseudo-science) consultants, are way out ahead of us in some respects, far behind in others. What are some major "rocks" that you might better outmaneuver with astute, consistent micro-moves, staying ahead, brushing aside obstacles? One book you might consider "curling up with, with a grain of salt" is Dooley's Brainfluence: 100 Ways to Persuade and Convince Consumers with Neuromarketing.

Wednesday, February 14, 2018

Ferreting out good pronunciation: 25% in the eye of the hearer!
Something of an "eye-opening" study: Integration of Visual Information in Auditory Cortex Promotes Auditory Scene Analysis through Multisensory Binding, by Town, Wood, Jones, Maddox, Lee, and Bizley of University College London, published in Neuron. One of the implications of the study:

"Looking at someone when they're speaking doesn't just help us hear because of our ability to recognise lip movements – we've shown it's beneficial at a lower level than that, as the timing of the movements aligned with the timing of the sounds tells our auditory neurons which sounds to represent more strongly. If you're trying to pick someone's voice out of background noise, that could be really helpful," They go on to suggest that someone with hearing difficulties have their eyes tested as well.

I say "implications" because the research was actually carried out on ferrets, examining how sound and light combinations were processed by their auditory neurons in their auditory cortices. (We'll take their word that the ferret's wiring and ours are sufficiently alike there. . . )

The implications for language and pronunciation teaching are interesting, namely: strategic visual attention to the source of speech models and participants in conversation may make a significant impact on comprehension and learning how to articulate select sounds. In general, materials designers get it when it comes to creating vivid, even moving models. What is missing, however, is consistent, systematic, intentional manipulation of eye movement and fixation in the process. (There have been methods that dabbled in attempts at such explicit control, e.g., "Suggestopedia"?)

In haptic pronunciation teaching we generally control visual attention with gesture-synchronized speech that highlights stressed elements, and with something analogous for individual vowels and consonants. How much are your students really paying attention, visually? How much of your listening comprehension instruction is audio only, as opposed to video sourced? See what I mean?

Look. You can do better pronunciation work.


Thursday, February 8, 2018

The feeling of how it happens: haptic cognition in (pronunciation) teaching

Am often asked how "haptic" (movement + touch) can enhance teaching, especially pronunciation teaching. A neat new study by Shaikh, Magana, Neri, Escobar-Castillejos, Noguez and Benes, Undergraduate students’ conceptual interpretation and perceptions of haptic-enabled learning experiences, is "instructive". Specifically, the study,

 " . . . explores the potential of haptic technologies in supporting conceptual understanding of difficult concepts in science, specifically concepts related to electricity and magnetism."

Now aside from the fact that work with (haptic) pronunciation teaching should certainly feel at times both "electric and magnetic", the research illustrates how haptic technology, in this case a joy-stick-like device, can help students more effectively figure out some basic, fundamental concepts. In essence, the students were able to "feel" the effect of current changes and magnetic attraction as various forces and variables were explored. The response from students to the experience was very positive, especially in terms of affirmation of understanding the key ideas involved.

The real importance of the study, however, is that haptic engagement is not seen as simply "reinforcing" something taught visually or auditorily. It is basic to the pedagogical process. In other words, experiencing the effect of electricity and magnetic attraction as the concepts are presented results in (what appears to be) a more effective and efficient lesson. It is experiential learning at its best, where what is acquired is more fully integrated cognition, where the physical "input" is critical to understanding, or may, in fact, precede more "frontal" conscious analysis and access to memory. (Reminiscent, of course, of Damasio's 2000 book: The feeling of how it happens: Body and emotion in the making of consciousness. Required reading!)

An analogous process is evident in haptic pronunciation instruction or any approach that systematically uses gesture or rich body awareness. The key is for that awareness, of movement and vibration or resonance, to at critical junctures PRECEDE explanation, modeling, reflection and analysis, not simply to accompany speech or visual display. (Train the body first! - Lessac)

We are doing a workshop in May that will deal with discourse intonation and orientation (the phonological processes that span sentence and conversational turn boundaries). We'll be training participants in a number of pedagogical gestures that later will accompany the speech in that bridging. To see what some of those used for expressiveness look (and feel) like, go here!


Monday, January 29, 2018

Anxious about your (pronunciation) teaching? You’d better act fast!

Probably the most consistent finding in research on pronunciation teaching, from instructors and students alike, is that it can be . . . stressful and anxiety-producing. And compounding that is often the additional pressure of providing feedback or correction. A common response, of course, is just not to bother with pronunciation at all. One coping strategy often recommended is to provide "post hoc" feedback, that is, after the learner or activity is finished, where you refer back to errors in as low-key and supportive a manner as possible. (As explored in previous posts, you might also toss in some deep nasal breathing, mindfulness or holding of hot tea/coffee cups at the same time, of course.) Check that . . . 

A new study by Zhang, Lei, Yin, Li and Li (2018), Slow Is Also Fast: Feedback Delay Affects Anxiety and Outcome Evaluation, published in Frontiers in Human Neuroscience, adds an interesting perspective to the problem. What they found, in essence, was that: 
  • Learners who tended toward high anxiety responded better to immediate positive feedback than to the same feedback delayed, or provided later. The same type of learners also perceived the overall outcomes of the training as lower when feedback was delayed. 
  • Learners who tended toward low anxiety responded equally well to immediate or delayed feedback and judged the training as effective in either condition. There was also a trend toward making better use of the feedback. 
  • Just why that might be the case is not explored in depth, but it obviously has something to do with being able to hold the experience in long-term memory more effectively, or with less clutter or emotional interference.
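
Just as a toy illustration of the rule of thumb those findings suggest (the function name, the anxiety scale and the threshold below are my own inventions, not from the study), the practical takeaway might look something like this in Python:

# Toy illustration of the rule of thumb suggested by Zhang et al. (2018):
# high-anxiety learners appear to benefit most from immediate positive
# feedback, while low-anxiety learners do fine with either timing.
# The 0.0-1.0 "anxiety" scale and the threshold are invented for illustration.

def feedback_timing(anxiety_level, threshold=0.6):
    """Return a suggested feedback timing for a learner."""
    if anxiety_level >= threshold:
        return "immediate, positive feedback"
    return "immediate or delayed feedback (either should work)"

print(feedback_timing(0.8))  # anxious learner -> immediate, positive feedback
print(feedback_timing(0.3))  # relaxed learner -> either timing
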
So, if that is more generally the case, it presents us with a real conundrum about how to consistently provide feedback in pronunciation teaching, or any teaching for that matter. Few would say that generating anxiousness, other than in the short term, as in getting "up" for tests or so-called healthy motivation in competition, is good for learning. If pronunciation work itself makes everybody more anxious, then it would seem that we should at least focus more on immediate feedback and correction or positive reinforcement. Waiting longer apparently just further handicaps those more prone to anxiety. How about doing nothing? 

This certainly makes sense of the seemingly contradictory results of research in pronunciation teaching showing instructors biased toward less feedback and correction but students consistently wanting more.

How do you provide relatively anxiety-free, immediate feedback in your class, especially if your preference is for delayed feedback? Do you? In haptic work, the regular warm up preceding pronunciation work is seen as critical to that process. (But we use a great deal of immediate, ongoing feedback.) Other instructors manage to set up a more generally nonthreatening, supportive, open and accommodating classroom milieu and "safe spaces". Others seem to effectively use the anonymity of whole-class responses and predictable drill-like activities, especially in oral output practice.

Anxiety management or avoidance? Would, of course, appreciate your thoughts and best practice on this . . . as soon as possible!

Citation: Zhang X, Lei Y, Yin H, Li P and Li H (2018) Slow Is Also Fast: Feedback Delay Affects Anxiety and Outcome Evaluation. Front. Hum. Neurosci. 12:20. doi: 10.3389/fnhum.2018.00020

Sunday, January 21, 2018

An "after thought" no longer: Embodied cognition, pronunciation instruction and warm ups!

If your pronunciation work is less than memorable or engaging, you may be missing a simple but critical step: warming up the body . . . and mind (cf., recent posts on using Mindfulness or Lessac training for that purpose.) Here's why.

A recent, readable piece by Cardona, Embodied Cognition: A Challenging Road for Clinical Neuropsychology, presents a framework that parallels most contemporary models of pronunciation instruction. (Recall the name of this blog: Haptic-integrated CLINICAL pronunciation research!) The basic problem is not that the body is not adequately included or applied in therapy or instruction, but that it generally "comes last" in the process, often just to reinforce what has been "taught", at best.

That linear model has a long history, according to Cardona, in part due to "the convergence of the localizationist approaches and computational models of information processing adopted by CN (clinical neuropsychology)". His "good news" is that research in neuroscience and embodied cognition has (finally) begun to establish more of the role of the body, relative to both thought and perception--one of parity, contributing bidirectionally to the process--as opposed to contemporary "disembodied and localization connectivist" approaches. (He might as well be talking about pronunciation teaching there.)

"Recently, embodied cognition (EC) has put the sensory-motor system on the stage of human cognitive neuroscience . . .  EC proposes that the brain systems underlying perception and action are integrated with cognition in bidirectional pathways  . . , highlighting their connection with bodily  . . . and emotional  . . .  experiences, leading to research programs aimed at demonstrating the influence of action on perception . . . and high-level cognition  . . . "  (Cardona, 2017) (The ellipted sections represent research citations in the original.) 

Pick up almost any pronunciation teaching text today and observe the order in which pronunciation features are presented and taught. I did that recently, reviewing over two dozen recent student and methods books. Almost without exception the order was something like the following:
  • perception (by focused listening), 
  • explanation/cognition (by the instructor), 
  • possible mechanical adjustment(s), which may or may not include engagement of more of the body than just the head (i.e., gesture), and then 
  • oral practice of various kinds, including some communicative pair or group work 
There were occasional recommendations regarding warm ups in the instructor's notes but nothing systematic or specific as to what that should entail or how to do it. 

The relationship between perception, cognition and body action there is very much like what Cardona describes as endemic to clinical neuropsychology: the body is not adequately understood as influencing how the sound is perceived or its essential identity as a physical experience. Instead, the targeted sound or phoneme is encountered first as a linguistic construct or constructed visual image.

No wonder an intervention in class may not be efficient or remembered . . .
So, short of becoming a "haptician" (one who teaches pronunciation beginning with body movement and awareness)--an excellent idea, by the way--how do you at least partially overcome the disembodiment and localization that can seriously undermine your work? A good first step is to just consistently do a good warm up before attending to pronunciation, a basic principle of haptic work, such as this one, which activates a wide range of muscles, sound mechanisms and mind.

One of the best ways to understand just how warm ups work in embodying the learning process is this IADMS piece on warming up before dance practice. No matter how you teach pronunciation, just kicking off your sessions with a well-designed warmup, engaging the body and mind first, will always produce better results. It may take three or four times to get it established with your students, but the long term impact will be striking. Guaranteed . . . or your memory back!

Thursday, January 4, 2018

Touching pronunciation teaching: a haptic Pas de trois
For you ballet buffs this should "touch home" . . . The traditional "Pas de trois" in ballet typically involves three dancers who move through five phases: an introduction, three variations (each done by at least one dancer), and then a coda of some kind with all dancing.

A recent article by Lamothe in the UK Guardian, Let's touch: why physical connection between human beings matters, reminded us of some of the earliest work we did in haptic pronunciation teaching, which involved students working together in pairs, "conducted" by the instructor, in effect "touching" each other on focus words or stressed syllables in various ways, on various body parts.

In today's highly "touch sensitive" milieu, any kind of interpersonal touching is potentially problematic, especially "cross-gender" or "cross-power plane", but there still is an important place for it, as Lamothe argues persuasively. Maybe even in pronunciation teaching!

Here is one example from haptic pronunciation teaching. Everything in the method can be done using intra-personal and interpersonal touch, but this one is relatively easy to "see" without a video to demonstrate the interpersonal version of it:
  • Students stand face to face, about a foot apart. The instructor demonstrates a word or phrase, tapping her right shoulder (with her left hand) on stressed syllables and her left elbow (with her right hand) on unstressed syllables--the "Butterfly" technique.
As teacher and students then repeat the word or phrase together,
  • One student lightly taps the other on the outside of the partner's right shoulder on stressed syllables (using her own left hand).
  • The other student lightly taps the outside of her partner's left elbow on unstressed syllables (using her own right hand). 
Note: Depending on the socio-cultural context, and on the general attire of the class, having all students use some kind of hand "disinfectant" may be in order! Likewise, pairing of students obviously requires knowing both them individually and the interpersonal dynamics of the class well. Consider competition among pairs or teams using the same technique. The tap sequence is sketched schematically just below.
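
For readers who like their choreography spelled out, here is a minimal, purely illustrative sketch in Python (the function name, the "S"/"u" stress-marking convention and the example word are my own, not part of EHIEP) of how the stress pattern of a word maps onto the two tap actions:

# Purely illustrative sketch (not part of EHIEP): maps a syllable stress
# pattern onto the two "Butterfly" tap actions described above.
# The "S"/"u" stress-marking convention and the example word are invented.

def butterfly_taps(syllables, stresses):
    """Return, syllable by syllable, which partner taps where."""
    assert len(syllables) == len(stresses)
    plan = []
    for syllable, stress in zip(syllables, stresses):
        if stress == "S":
            # Partner A: own left hand taps the outside of B's right shoulder
            plan.append((syllable, "A taps B's right shoulder (stressed)"))
        else:
            # Partner B: own right hand taps the outside of A's left elbow
            plan.append((syllable, "B taps A's left elbow (unstressed)"))
    return plan

# Example: "banana" -- stress on the second syllable
for syllable, action in butterfly_taps(["ba", "NA", "na"], ["u", "S", "u"]):
    print(f"{syllable:>4}: {action}")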

If you do have the class and context for it, try a bit of it, for instance on a few short idioms. It takes a little getting used to, but the impact of touch in this relatively simple exercise format--and the close paralinguistic "communication"-- can be very dramatic and . . . touching.

Keep in touch!

Saturday, December 23, 2017

Vive la efference! Better pronunciation using your Mind's Ear!

"Efference" . . . our favorite new term and technique: to imagine saying something before you actually say it out loud, creating an "efferent copy" that the brain then uses in efficiently recognizing what is heard or what is said.  Research by Whitford, Jack, Pearson, Griffiths, Luque, Harris, Spencer, and Pelley of University of New South Wales, Neurophysiological evidence of efference copies to inner speech, summarized by, explored the neurological underpinnings of efferent copies, having subjects imagine saying a word before it was heard (or said.)

The difference in the amount of processing required for subsequent occurrences following the efference copies, as observed by fMRI-like technology, was striking. The idea is that this is one way the brain efficiently deals with speech recognition and variance. By (unconsciously) having "heard" the target, or an idealized version of it, just previously in the "mind's ear", so to speak, we have more processing power available to work on other things . . .

Inner speech has been studied and employed extensively in second language research and practice (e.g., Shigematsu, 2010, dissertation: Second language inner voice and identity) and in different disciplines. There is no published research on the direct application of efference in our field to date that I’m aware of.

The haptic application of that general idea is to “imagine” saying the word or phrase synchronized with a specifically designed pedagogical gesture before articulating it.  In some cases, especially where the learner is highly visual, that seems to be helpful, but we have done no systematic work on it.  The relationship with video modeling effectiveness may be very relevant as well. Here is a quick thought/talk problem for you to demonstrate how it works:

Imagine yourself speaking a pronunciation-problematic word in one of your other languages before trying to say it out loud. Do NOT subvocalize or move your mouth muscles. (Add a gesture for more punch!) How’d it work?

Imagine your pronunciation work getting better while you are at it!

Friday, December 15, 2017

Object fusion in (pronunciation) teaching for better uptake and recall!

Your students sometimes can't remember what you so ingeniously tried to teach them? A new study by D’Angelo, Noly-Gandon, Kacollja, Barense, and Ryan at the Rotman Research Institute in Ontario, "Breaking down unitization: Is the whole greater than the sum of its parts?", suggests an "ingenious" template for helping at least some things "click and stick" better. What you need for starters:
  • 2 objects (real or imagined) (to be fused together)
  • an action linking or involving them, which fuses them
  • a potentially tangible, desirable consequence of that fusion
The example of the "fusing" protocol from the research was to visualize sticking an umbrella in the keyhole of your front door to remind yourself to take your umbrella so you won't get soaking wet on the way to work tomorrow. Subjects who used that protocol, rather than just motion or action/consequence, were better at recalling the future task. Full disclosure here: the subjects were adults, age 61 to 88. Being near dead center of that distribution myself, it certainly caught my attention! I have been using that strategy for the last two weeks or so with amazing results . . . or at least memories!

So, how might that work in pronunciation teaching? Here's an example:

Consonant: th (voiceless)
Objects: upper teeth, lower teeth, tongue
Fusion: tongue tip positioned between the teeth as air blows out (action)
Consequence: better pronunciation of the th sound

Haptic pronunciation adds to the con-fusion

Vowel (low, central 'a'), done haptically (gesture + touch)
Objects: hands touch at waist level, as vowel is articulated, with jaw and tongue lowered in mouth, with strong, focused awareness of vocal resonance in the larynx and bones of the face.
Fusion: tongue and hand movement, sound, vocal resonance and touch
Consequence: better pronunciation of the 'a' sound
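
If it helps to see the template abstracted from the two examples, here is a minimal, purely illustrative sketch in Python (the class and field names are my own invention, not from the study) of the three-part objects-fusion-consequence recipe:

from dataclasses import dataclass

# Purely illustrative sketch of the three-part "fusion" template above.
# The class and field names are invented, not from the study.

@dataclass
class FusionCue:
    objects: list       # the objects to be fused together
    fusion: str         # the action that links/fuses them
    consequence: str    # the tangible, desirable outcome

voiceless_th = FusionCue(
    objects=["upper teeth", "lower teeth", "tongue"],
    fusion="tongue tip positioned between the teeth as air blows out",
    consequence="better pronunciation of the 'th' sound",
)

haptic_low_a = FusionCue(
    objects=["hands touching at waist level", "lowered jaw and tongue",
             "vocal resonance in the larynx and bones of the face"],
    fusion="hand movement, articulation, resonance and touch, all at once",
    consequence="better pronunciation of the low, central 'a'",
)

for cue in (voiceless_th, haptic_low_a):
    print(f"Fuse {', '.join(cue.objects)} -> {cue.consequence}")

The point, of course, is the order: the objects, then the fusing action, then the payoff. The code is just a mnemonic for the mnemonic.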

Key concept: It is not much of a stretch to say that our sense of touch is really our "fusion" sense, in that it serves as a nexus-agent for the others (Fredembach, et al, 2009; Legarde and Kelso 2006). Much like the created image of the umbrella in the keyhole evokes a memorable "embodied" event, probably even engaging our tactile processing center(s), the haptic pedagogical movement pattern (PMP) should work in a similar manner, either in actual physical practice or visualized.

One very effective technique, in fact, is to have learners visualize the PMP (gesture+sound+touch) without activating the voice. (Actually, when you visualize a PMP it is virtually impossible to NOT experience it, centered in your larynx or voice box.)

If this is all difficult for you to visualize or remember, try first imagining yourself whacking your forehead with your iPhone and shouting "Eureka!"

Baycrest Center for Geriatric Care (2017, August 11). Imagining an Action-Consequence Relationship Can Boost Memory. NeuroscienceNews. Retrieved August 11, 2017.

Wednesday, December 6, 2017

OLOA! Pronunciation Teaching Lagniappe!
When the "oral reading baby" was for a time tossed out with the structuralist reading and pronunciation teaching "bath", a valuable resource was temporarily mislaid. New research by Forrin and MacLeod of Waterloo University confirms what common sense tells us: that reading a text aloud or even verbalizing something that you need to remember (get ready!) actually may help. Really? In that study the "production effect" was quite significant. From the Science Daily summary:

"The study tested four methods for learning written information, including reading silently, hearing someone else read, listening to a recording of oneself reading, and reading aloud in real time. Results from tests with 95 participants showed that the production effect of reading information aloud to yourself resulted in the best remembering . . . And we know that regular exercise and movement are also strong building blocks for a good memory."

There have been any number of blogposts here advocating the use of oral reading in pronunciation teaching, but this is one argument that I had not encountered, or was not all that interested in, in part because I had an aunt who read and thought aloud constantly and very "irritatingly"! (And who, it appears not incidentally, had a phenomenal memory for detail.) You may well have an aunt or associate who uses the same, often socially dysfunctional, memory heuristic.

One often unrecognized source of lagniappe (bonus) from attention to pronunciation, especially in the form of oral reading in class or as personalized homework, is this production effect, the actual focus of the study: any number of actions or physical movements may contribute to memory for language material. The text being verbalized still has to be "meaningful" in some sense, according to the study. In haptic work we use the acronym OLOA (out loud oral anchoring): targeted elements of speech accompanied by gesture and touch. 

That can happen any time in instruction, of course, but the precise conditions for it being effective are interesting and worth exploring. One of the procedures I have frequently set up in teaching observations is analyzing the extent and quality of OLOA (In Samoan: one's labor, skill or possessions!) See if you can remember to use more of that intentionally next week in class and observe what happens. (If not, try a little OLOA on this blogpost!)

University of Waterloo. (2017, December 1). Reading information aloud to yourself improves memory of materials. ScienceDaily. Retrieved December 6, 2017.

Friday, November 24, 2017

NEW book chapter: A haptic pronunciation course for Freshman ESL college students

John Murphy's excellent new book, Teaching the Pronunciation of English: Focus on whole courses, has just been published! It is in many ways a celebration of pronunciation teaching.

Unapologetic haptic disclaimer: Of the 12 chapters, done by 17 contributors, our favorite (understandably) is by Nate Kielstra (with William Acton): "A haptic pronunciation course for Freshman ESL college students!"

From the foreword:

"This volume fills a gap by introducing readers to whole courses focused on teaching the pronunciation of English as a second, foreign, or international language. This collection is designed to support more effective pronunciation teaching in as many language classrooms in as many different parts of the world as possible and to serve as a core text in an ESOL teacher development course dedicated to preparing pronunciation teachers."

It certainly delivers on that.

This volume is based on some of the same principles as Murphy and Byrd's earlier (2001) Understanding the courses we teach: Local perspectives on English language teaching. (Which we still use in our graduate program as a template for course development/description.)

One striking feature of the volume which we endorse enthusiastically is the idea that the courses described are more or less "stand alone". Talk about revolutionary (or Back-to-the-future-ish!) In other words, they are seen as effective even without much subsequent follow up by other classroom instructors teaching other skill areas--although all recommend (implicitly or explicitly) application of what is learned elsewhere in the curriculum.

Just imagine what it would be like should the inspired work of one of these "master classes" in your school go spilling off into the rest of the program, either in simply improved student pronunciation or in instructors who take the process and run with it . . .

Murphy's first two chapters do a nice job of laying out the basics of what such courses need to cover or contain. Nate's chapter will give you a good picture of what a haptic-based course can look like.

Required reading!

Sunday, November 12, 2017

OMG! Hand2hand combat in the classroom: Facing problems in (pronunciation) teaching

OMG! (other-managed gesture) is fundamental to effective, systematic use of gesture in any classroom, especially pronunciation teaching. And exactly how you "face" that issue may be critical. Two fascinating new studies may suggest how.

As a Sumo fan, haptician (practitioner of haptic pronunciation teaching) and veteran, one of my favorite metaphors for ongoing interaction in the (pronunciation) classroom has always been "H2H" (hand2hand combat). Research by Mojtahedi, Fu and Santello of Arizona State University - Tempe highlights an important variable in such engagement, evident in the title: On the role of physical interaction on performance of object manipulation by dyads.
Two of their key findings: (a) subjects whose solo performance on a "physical" task was initially relatively low benefited from H2H training in dyads, while those coming in with higher skill did not; and (b) for those who did benefit, standing side-by-side for the dyadic work was superior to working F2F. The "assistive" task was manipulating a horseshoe-like object in space, following varied instructions, either together or separately--best done by "coming alongside" the other person.

Granted, there is a difference between two people holding on to a piece of metal and guiding it around together, cooperatively--and an instructor being mirrored in gesturing by students across the room, synchronized with speaking words and phrases. Research in mirror neurons in the brain, however, would suggest that the difference is far less than one might think. In a very real sense, if you are paying close attention, watching something being done is experienced and managed in the brain very much like doing it yourself.

Now hold that thought for a minute while we go on to the next, related study, How spatial navigation correlates with language, by Vukovic and Shtyrov at the HSE Centre for Cognition and Decision Making. In this study, subjects were first identified as to whether they were more "egocentric" or "allocentric" in their ability to grasp the perspective of another person, somewhat independent of their own position in space or time. (A concept somewhat analogous to field dependence/independence.)

What they discovered was that subjects who were (spatially) allocentric were also better at understanding oral instructions that required differing responses, depending on whether the subject pronoun of the description was 1st person singular or 3rd person. More importantly, the same areas of the brain were "lighting up"--that is, processing the problem--for both language and spatial navigation.

Now juxtapose that with the finding of the other study, which demonstrated that side-by-side (SxS), rather than face-to-face (F2F), "help" on the H2H task was more effective. F2F assistive engagement requires, in part, transposing the movement of the person facing you to the opposite side of your body, an operation that we discovered a decade ago in haptic pronunciation teaching is exceedingly difficult for some instructors and students.

So what we have is a complex of factors affecting success in gesture work: (probably) inherited ego- or allo-centric tendencies, which will impact how well one can accommodate a model moving in front of you taking on the same handedness as opposed to a mirror image, and the fact that some less skillful learners are assisted more effectively by a partner SxS instead of standing F2F.

In other words, both studies seem to be getting at the same underlying variable or issue for us: why some gestural work works and some doesn't. This is potentially an important finding for haptic pronunciation teaching or just use of gesture in teaching in general, one that should impact our "standing" in the classroom, where we locate ourselves relative to learners when we manage or conduct gesture.

Sometimes facing your problem is not the answer!


Mojtahedi K, Fu Q and Santello M (2017) On the Role of Physical Interaction on Performance of Object Manipulation by Dyads. Front. Hum. Neurosci. 11:533. doi: 10.3389/fnhum.2017.00533

Nikola Vukovic et al, Cortical networks for reference-frame processing are shared by language and spatial navigation systems, NeuroImage (2017). DOI: 10.1016/j.neuroimage.2017.08.041

Friday, November 3, 2017

Operant conditioning rides again in language teaching!
 "The major difference between rats and people is that rats learn from experience." B.F Skinner

Quick quiz: What is "operant conditioning" and of what value is it to you in understanding language learning and teaching? If you can't answer either part of that question, unfortunately, you're not alone. Your formal training may well have lacked any thoughtful consideration of the concept of "operant conditioning". Following Chomsky's devastating attack on it and on behaviorism more generally, and the ascendancy of cognitive/constructivist theory, it has in most learning frameworks appeared to have been dismissed, at best. Not really, according to an excellent new piece by Sturdy and Nicoladis, "How Much of Language Acquisition Does Operant Conditioning Explain?"--it has just gone underground.

Their basic argument: "Researchers have ended up inventing learning mechanisms that, in actual practice, not only resemble but also in fact are examples of operant conditioning (OC) by any other name they select."

According to the meta-analysis, the most persuasive cases or contexts discussed are (a) socialization, (b) ritualization and (c) early child language learning. At least for one whose "basic training" in psychology as an undergraduate happened in 1962, it is a breath of fresh (familiar) air, not exactly vindication, but pretty close. It applies especially to the more embodied dimensions of pronunciation instruction, such as physical work on articulation and the felt sense of sound production in the vocal mechanism--and, of course, haptic engagement.

But it also is fundamental to understanding and using context-based feedback that is critical to socialization or social constructivism, including the role of ritual, pragmatics and long-term reinforcement mechanisms.

If you don't get a full-body, warm fuzzy from this piece, read it again holding a cup of hot tea or coffee. 

Required reading.

Sturdy CB and Nicoladis E (2017) How Much of Language Acquisition Does Operant Conditioning Explain? Front. Psychol. 8:1918. doi: 10.3389/fpsyg.2017.01918

Wednesday, October 25, 2017

Enhanced courage and L2 pronunciation through acute alcohol consumption!
Some studies are enough to drive you to drink . . . and then miss numerous unaccounted for sources of variance.

You may have seen popular commentary on this recent study, "Dutch courage? Effects of acute alcohol consumption on self-ratings and observer ratings of foreign language skills" by Renner, Kersbergen, Field, and Werthmann of the University of Liverpool, published in the Journal of Psychopharmacology. (ScienceDaily recast the title as: "Dutch courage: Alcohol improves foreign language skills.")

This study had potential. What they found, basically, was that rater evaluation of pronunciation, as opposed to overall speech production, was better, but (interestingly!) the subjects themselves did not perceive their L2 speech to be better. The subjects had been provided with a pint of something a bit earlier--not the raters or the experimenters, as far as we can tell.

Another relatively interesting feature was that the evaluations were done by blindfolded judges (which, in itself, may have been problematic, as noted in recent blogposts here) and that the speech was evaluated during dialogue (interesting, again, but not sufficiently unpacked), not just with controlled repetition in a laboratory setting, as had been the case in many past studies (e.g., the summary of Guiora et al, 1972 by Ellis).

Two terminological issues:
  • By "acute" the researchers indicate that it was a "low dose", one pint of 5% beer or equivalent. Now in the field of psychopharmacology that term, acute, may just mean something like "one time" or unusual. (I find conflicting opinions on that.) In normal North American English usage, of course, that usually is taken to mean something like: severe, critical, long term, etc. --or, of course, insightful, attention to detail, etc.  In Guiora, et al (1972) the alcohol dosage where the main effect was evident was at about one ounce of alcohol in a cocktail, roughly equivalent to that used in this study--but it was not described as "acute!"
  • The subjects were termed "bilingual" (absent any empirical measurement of what that meant exactly), having learned Dutch "recently"--at best a loose interpretation of what "bilingual" is generally taken to mean in the field today. That proficiency question may, in fact, have had a significant impact on the outcome of the study.
So, why was the perceived improvement in subjects' speech found just in their pronunciation, not in other aspects of their speech or behavior? In Guiora et al (1972), for example, to explore that effect, subjects also had to perform a motor skill task, putting shaped blocks in holes of different shapes. What they found, not surprisingly, was that at the 1-ounce level, pronunciation improved and manual dexterity declined. The "physical" correlate was clear. One of the main criticisms of that alcohol study was that the alcohol effect may have been primarily "just" a loosening up of the muscles and vocal mechanism, not some higher-level cognitive functioning. (Brown, 1989, also cited in Ellis, above).

Guiora et al (1972) were ultimately looking for the impact of that effect on "language ego", the perception of one's identity in the L2. In a way they found that--a correlate. It is to some extent a matter of design directionality: loosening up the body does the same to the vocal mechanism. Will it be any surprise to find out that other non-pharmacological yet still "somatic" treatments--hypnosis, mindfulness or simply kinaesthetic engagement, such as gestural (or even haptic) work--do something similar? Not at all.

In other words, the "pharmacogs" seem to have come up with a possible explanation for a well-appreciated phenomenon: after a shot, you'll be more courageous (or foolhardy) and your L2 pronunciation will be perceived as improved as long as your date is blindfolded or the room is very dark--but you won't know it, or care . . .

A little more interdisciplinary research and theory-integration, along with more in-depth concern for the relevant "cocktail cognitions" of the subjects, might have made this more of a fun read. Of course, the ultimate source of insight on the effect of alcohol will always be Brad Paisley!

Fritz Renner, Inge Kersbergen, Matt Field, Jessica Werthmann. Dutch courage? Effects of acute alcohol consumption on self-ratings and observer ratings of foreign language skills. Journal of Psychopharmacology, 2017; 026988111773568 DOI: 10.1177/0269881117735687

Friday, October 20, 2017

Bedside manner in (pronunciation) teaching: the BATHE protocol
Sometimes the doctor-patient metaphor does work in our work!

Recovering from recent surgery here at home, and especially recalling the wonderful way that I was treated and prepared prior to the operation by the nurse in pre-op, I found that this study, "Inpatient satisfaction improved by five-minute intervention," summarized by the Augusta Free Press and published originally in Family Medicine by Pace, Somerville, Enyioha, J. Allen, Lemon and C. Allen of the University of Virginia, really hit home, both as an interpersonal framework for dealing with problems in general and (naturally) for pronunciation teaching!

The research looked at the effectiveness of a training system for preparing doctors to talk better with patients: bedside manner. In summary, patient satisfaction went up substantially, and time spent per patient generally went down. The acronym for the protocol is BATHE. Below is my paraphrase of what constitutes each phase of the process:

B - Start by getting concise background information from patients
A - Help them talk about how they are feeling (affect)
T - Together, review the problem (trouble)
H - Discuss how the problem is being handled.
E - Confirm your understanding of the situation and how the patient is feeling (empathy).

That is a deceptively elegant protocol. Next time you have a student (or colleague) or friend approach you with a difficult problem, keep that in mind. That also translates beautifully into pronunciation work, especially where there is appropriate attention to the body (as in haptic work, of course!) Here is how the acronym plays out in our work (sketched schematically after the list):

B - Start with providing a concise explanation of the target, also eliciting from students what their understanding is of what you'll be working on.
A - Anchor the target sound in a way that learners get a good "felt sense" of it, i.e., awareness and control of the sensations in the vocal tract and upper body
T - Together, talk through the "cash value" and functional load of the target and practice the target sound(s) in isolation and context. 
H - Discuss how the student may be handling the problem already, or could, and what you'll do together going forward, including homework and follow up in the classroom in the future.
E - Finally, go back to brief, active, "physical" review and anchoring of the sound, also providing some realistic guidance as to the process of integrating the sound or word into their active speaking, especially the role of consistent, systematic practice.
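
For the checklist-minded, here is a minimal, purely illustrative sketch in Python (the structure and naming are mine, not from the Family Medicine study or EHIEP) of the adapted BATHE sequence as an ordered lesson-plan checklist:

# Purely illustrative sketch: the adapted BATHE sequence as an ordered
# lesson checklist. The phrasing paraphrases the list above; the structure
# and names are invented.
BATHE_PRONUNCIATION = [
    ("B", "Background: concise explanation of the target; elicit the "
          "students' current understanding of it"),
    ("A", "Anchor (affect): anchor the target sound so learners get a felt "
          "sense of it in the vocal tract and upper body"),
    ("T", "Together (trouble): talk through the 'cash value' and functional "
          "load; practice the target in isolation and in context"),
    ("H", "Handling: discuss how the student is (or could be) handling the "
          "problem, plus homework and classroom follow-up"),
    ("E", "Empathy (embodied review): brief physical review and anchoring, "
          "with realistic guidance on practice and integration"),
]

def print_lesson_plan(steps=BATHE_PRONUNCIATION):
    for letter, description in steps:
        print(f"{letter}: {description}")

print_lesson_plan()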

One remarkable feature of that system, other than the operationalized empathy, of course, is the way it creates a framework for staying focused on the problem and solution. How does that map onto your own "BATHE-side manner?"

Saturday, October 14, 2017

Empathy for strangers: better heard and not seen? (and other teachable moments)

The technique of closing one's eyes to concentrate has both everyday, common-sense appeal and empirical research support. For many, it is common practice in pronunciation and listening comprehension instruction. Several studies of the practice under various conditions have been reported here in the past. A nice 2017 study by Kraus of Yale University, Voice-only communication enhances empathic accuracy, examines the effect from several perspectives.
What the research establishes is that perception of the emotion encoded in the voice of a stranger is more accurately determined with eyes closed, as opposed to just watching the video without sound or watching the video with the sound on. (Note: The researcher concedes in the conclusion that the same effect might not be as pronounced were one listening to the voice of someone we are familiar or intimate with, or were the same experiments carried out in some culture other than "North American".) In the study there is no unpacking of just which features of the strangers' speech are being attended to, whether linguistic or paralinguistic, the focus being:

" . . . paradoxically that understanding others’ mental states and emotions relies less on the amount of information provided, and more on the extent that people attend to the information being vocalized in interactions with others."

The targeted effect is statistically significant, well established. The question is, to paraphrase the philosopher Bertrand Russell, does this "difference that makes a difference" make a difference--especially to language and pronunciation teaching?

How can we use that insight pedagogically? First, of course, is the question of how MUCH better the closed-eyes condition will be in the classroom and, even if it is better initially, whether it will hold up with repeated listening to the voice sample or conversation. Second, in real life, when do we employ that strategy, either on purpose or by accident? Third, there was a time when radio or audio drama was a staple of popular media and instruction. In our contemporary visual media culture, as reflected in the previous blog post, the appeal of video/multimedia sources is near irresistible. But maybe still worth resisting?

Especially with certain learners and classes, in classrooms where multi-sensory distraction is a real problem, I have over the years worked successfully with explicit control of visual/auditory attention in teaching listening comprehension and pronunciation. (It is prescribed in certain phases of haptic pronunciation teaching.) My sense is that the "stranger" study is actually tapping into comprehension of new material or ideas, not simply new people/relationships and emotion. Stranger things have happened, eh!

If this is a new concept to you in your teaching, close your eyes and visualize just how you could employ it next week. Start with little bits, for example when you have a spot in a passage of a listening exercise that is expressively very complex or intense. For many, it will be an eye-opening experience, I promise!

Kraus, M. (2017). Voice-only communication enhances empathic accuracy. American Psychologist, 72(6), 644-654.