If you like this blog, send it to 3 (or more) of your friends and encourage them to sign up. Let’s keep the conversation going!
I need to start this post with a disclaimer that may surprise you: I am not Christopher Hitchens. First of all, our politics are very different. More importantly for purposes of this
post, Hitchens wrote a moving and erudite piece for Vanity Fair about the impact of losing his speaking voice to cancer. It’s worth reading: http://www.vanityfair.com/culture/features/2011/06/christopher-hitchens-unspoken-truths-201106
ALS is robbing me of my speaking voice. For me, the onset of this damned progressive disease was with my speech: I started to have trouble enunciating words clearly. (That’s how ALS starts in about 1/3 of people with the disease.) I’m now at the point where it is usually very hard for me to project my voice beyond a whisper.
Followers of this blog know that some time ago I began using a voice amplifier (See: “There Goes That Person With . . .”). That doesn’t help so much now. But, fortunately, there are other technologies.
Many people with many different kinds of disabilities have known for a long time that technology makes continuing to be — and engage — in the world a possibility. For folks who have trouble talking, one form the technology takes is programs that convert text to speech. I use a program called NeoKate, a free app for the iPad. I type in the text, tap on “done” and then Kate “says” what I typed.
Kate’s English is pretty good, but she does have problems with some words, such as proper names and words that might have multiple pronunciations, like “read.” So, occasionally, after Kate attempts to pronounce what I’ve typed, I have to retype using phonetic spellings to get the words spoken correctly.
The program allows me to adjust Kate’s speed (so I can make her sound like a native New Yorker if I want to), her pitch, and her volume. And if I want to save something I typed so I can have Kate speak it again later, there’s a library where I can save files, with names I choose so I can find the files again later.
How Kate and I Behave in Company
In two-way conversations, using Kate to communicate slows things down a bit, since it always takes longer to type than to speak. It means there are longer pauses than there would be if two humans were talking to each other directly (without text-to-voice assistance). But it also means, at least so far in my experience, that people pay close attention to what Kate is saying for me, and it makes me listen more carefully to what’s being said to me.
In conversations involving more than two people, Kate and I have a tougher time. You may have noticed that conversations often go off on tangents, rather than following a straight line. You may also have noticed that sometimes people talk over each other — one person starts to say something before someone else finishes. When you add Kate to this mix, her part of the conversation can become quite disjointed. By the time I’ve finished typing a response to something someone said, the conversation may well have taken 3 other turns. When Kate says what I’ve typed, people have to stop to remember what the topic was that prompted the comment, and get drawn back to a part of the conversation they thought was finished.
The delay has a real impact on conversations. Oddly, even though Kate’s contributions arrive out of sequence, they can command more attention than remarks in non-technologically driven chats. It makes me think of those old ads for the brokerage firm EF Hutton: when Kate talks, people listen.
Can We Banter with Technology?
The m.o. for much conversation in my circles is banter: a statement is made, someone responds with a quip, and someone else answers the quip with another quip or statement or story. I love to banter, but my speaking ability won’t let me do that anymore, and Kate’s ability is limited by the time it takes for me to tell her what to say.
A friend of mine has suggested that we could even the conversational playing field by requiring everyone in the conversation to use Kate to communicate. That might be a little
extreme, but we might all listen to each other better if we did something like this. On the other hand, we might all get so involved in typing our own stuff that we won’t listen to other people’s.
Given how ALS progresses, at some point (hopefully in the far distant future), it will become harder for me to type fast, or at all. Then, assuming (as I do) that I will continue to want to communicate, I will use technology that allows me to type by moving my eyes over characters on the computer screen. This process will inevitably be slower than the NeoKate method. The concept of “super slo-mo” will take on a whole new dimension. And the ability of other people to listen in this mode might be sorely tested.
So, how I communicate in conversation with others – both how we talk and how well we listen – will be a growing issue for me. If it’s an issue for me, it’s an issue for everyone with whom I come in contact, and everyone else who is in the place of losing their speaking voices.
You may have noticed that this blog post is less about health policy – my usual topic – and more about social behavior. How we talk to each other affects how we hear each other, no matter what our limitations are or may become.
Think about it.