AI-powered technology gives a voice to Regina woman who lost speech after suffering a stroke

In 2005, at the age of 30, Ann Johnson suffered a rare brainstem stroke that left her unable to speak. Since then, the Saskatchewan mother of two has been unable to communicate naturally with her loved ones.

But last week she made history in a California lab, chatting with her husband thanks to an AI-powered technology that converts her brain signals into speech. The researchers also synthesized a voice that sounds like her own and created a digital avatar she can use to convey feelings and expressions.

“The goal of all of this is simply to restore more comprehensive communication,” said Dr. Eddie Chang, a neurosurgeon at the University of California, San Francisco.

Johnson, who was a teacher at the time of her stroke, was diagnosed with locked-in syndrome, a condition that causes paralysis of almost all muscles except those that control the eyes. She has since regained some movement, but she still cannot speak.

The researchers surgically implanted a thin grid of electrodes on the surface of her brain, over a region critical for speech. The grid was connected by a cable to computers trained to decode her brain activity and generate not just text, but speech and facial expressions as well.

“We also decoded the audio directly, the sound of what she is trying to say. Then we actually decoded facial movements to animate an avatar,” said researcher Sean Metzger.
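To make that pipeline concrete, here is a minimal, hypothetical sketch in Python of the idea the researchers describe: one stream of electrode features feeding three decoders, for text, speech audio, and facial animation. Everything in it (the toy vocabulary, the random stand-in weights, the feature shapes) is invented for illustration; the actual system uses deep neural networks trained on Johnson's recorded brain activity.

```python
# Hypothetical sketch of one neural-feature stream driving three decoders.
# All weights and shapes are illustrative stand-ins, not the real models.
import numpy as np

rng = np.random.default_rng(0)

N_ELECTRODES = 253                                  # illustrative channel count
VOCAB = ["anything", "is", "possible", "<silence>"]  # toy vocabulary

# Stand-ins for trained model weights (random here, learned in reality).
W_text = rng.normal(size=(len(VOCAB), N_ELECTRODES))  # features -> word scores
W_audio = rng.normal(size=(80, N_ELECTRODES))         # features -> audio frame
W_face = rng.normal(size=(10, N_ELECTRODES))          # features -> avatar params

def decode_frame(features: np.ndarray):
    """Map one frame of neural features to text, audio, and face outputs."""
    word = VOCAB[int(np.argmax(W_text @ features))]   # decoded word
    audio_frame = W_audio @ features                  # synthesized-speech features
    face_params = np.tanh(W_face @ features)          # facial-animation parameters
    return word, audio_frame, face_params

# Simulate a few frames of activity while the user attempts to speak.
for t in range(3):
    features = rng.normal(size=N_ELECTRODES)          # placeholder for real signals
    word, audio, face = decode_frame(features)
    print(f"frame {t}: word={word!r}, audio shape={audio.shape}, "
          f"face params={face[:3].round(2)}")
```

The key design point the quote describes is that all three outputs are decoded from the same underlying brain signals, so speech, voice, and expression stay synchronized.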

In a video-captured exchange, Johnson was able to chat about baseball with her husband, Bill. He asked her how she felt about the Blue Jays' chances of winning that day.

“Anything is possible,” Johnson replied in her synthetic voice.

Bill then commented on her lack of confidence.

“You're right. I guess we'll see, won't we?” Johnson said, turning to her husband and smiling.

Researchers used audio recordings of a speech she gave at her wedding before the stroke to teach a computer to synthesize her voice. She said it was like listening to an old friend.

The researchers emphasized that the brain-computer interface (BCI) does not read minds; it interprets signals from the part of the brain that controls the vocal tract. The user must actually attempt to say the words for it to work.

“This isn't about someone imagining saying something or thinking about words, but actually trying to say them,” Chang said.

The interface can decode about 78 words per minute, which is about half the speed of a typical conversation. That's a huge improvement over the 14 words per minute Johnson can produce with her current assistive communication device, which uses eye movements.

Johnson regularly traveled from Regina to California with her husband to participate in the multi-year clinical trial. A GoFundMe page was set up to cover travel expenses. On the website of the high school where she used to teach, Johnson expressed a desire to give back and help others.

Researchers say this is just the starting point. They hope to make smaller devices that can connect wirelessly to computers to help others with speech disabilities from stroke, ALS, or other neurological disorders.

Their hope is that more people will be able to communicate with their loved ones again.

“I think all of that will be possible in the next few years,” said Chang.
