Scientists create speech from brain signals

The new system deciphers the brain's motor commands, guiding vocal movement during speech

Benedict Carey | NYT
Last Updated: Apr 28, 2019 | 12:34 AM IST
“In my head, I churn over every sentence ten times, delete a word, add an adjective, and learn my text by heart, paragraph by paragraph,” wrote Jean-Dominique Bauby in his memoir, “The Diving Bell and the Butterfly.” In the book, Bauby, a journalist and editor, recalled his life before and after a paralysing stroke that left him virtually unable to move a muscle; he tapped out the book letter by letter, by blinking an eyelid.

Thousands of people are reduced to similarly painstaking means of communication as a result of injuries suffered in accidents or combat, of strokes, or of neurodegenerative disorders such as amyotrophic lateral sclerosis, or ALS, that disable the ability to speak.

Now, scientists are reporting that they have developed a virtual prosthetic voice, a system that decodes the brain’s vocal intentions and translates them into mostly understandable speech, with no need to move a muscle, not even those in the mouth. (The physicist and author Stephen Hawking used a muscle in his cheek to type keyboard characters, which a computer synthesised into speech.)

“It’s formidable work, and it moves us up another level toward restoring speech” by decoding brain signals, said Anthony Ritaccio, a neurologist and neuroscientist at the Mayo Clinic in Jacksonville, Fla., who was not a member of the research group.

Researchers have developed other virtual speech aids, which work by decoding the brain signals responsible for recognising letters and words, the verbal representations of speech. But those approaches lack the speed and fluidity of natural speaking.

The new system, described on Wednesday in the journal Nature, deciphers the brain’s motor commands guiding vocal movement during speech — the tap of the tongue, the narrowing of the lips — and generates intelligible sentences that approximate a speaker’s natural cadence.
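In rough outline, that is a two-stage pipeline: decode articulatory movements from neural activity, then turn those movements into sound. The toy sketch below illustrates only that general idea; the array sizes, feature names and stand-in linear maps are invented for illustration and are not the researchers’ actual architecture, which the article does not detail.

```python
# Illustrative sketch only: a toy two-stage decoder in the spirit of the
# approach described above (brain signals -> vocal-tract movements -> sound).
# Shapes, feature names, and the random linear "models" are placeholders.
import numpy as np

rng = np.random.default_rng(0)

# Stage 0: simulated neural recordings, e.g. activity from electrodes placed
# over the speech-motor cortex (channels x time steps).
n_channels, n_steps = 64, 200
neural = rng.standard_normal((n_channels, n_steps))

# Stage 1: decode articulatory kinematics (lip narrowing, tongue taps, jaw
# height, and so on) from the neural signals. A real system would use a
# trained neural network; a random linear map stands in for it here.
n_articulators = 12
decode_kinematics = rng.standard_normal((n_articulators, n_channels)) * 0.1
kinematics = decode_kinematics @ neural            # articulators x time

# Stage 2: map the articulatory trajectories to acoustic features (for
# instance, spectrogram frames) that a vocoder would render as audible speech.
n_freq_bins = 80
kinematics_to_spectrum = rng.standard_normal((n_freq_bins, n_articulators)) * 0.1
spectrogram = kinematics_to_spectrum @ kinematics  # frequency bins x time

print("decoded articulator trajectories:", kinematics.shape)
print("synthesized spectrogram frames:  ", spectrogram.shape)
```

The point of the intermediate step is the one the article makes: decoding the motions of the vocal tract, rather than letters or words, is what lets the output approximate the pace of natural speech.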

Experts said the new work represented a “proof of principle,” a preview of what may be possible after further experimentation and refinement. The system was tested on people who speak normally; it has not been tested in people whose neurological conditions or injuries, such as common strokes, could make the decoding difficult or impossible.

For the new trial, scientists at the University of California, San Francisco, and UC Berkeley recruited five people who were in the hospital being evaluated for epilepsy surgery.

Many people with epilepsy do poorly on medication and opt to undergo brain surgery. Before operating, doctors must first locate the “hot spot” in each person’s brain where the seizures originate; this is done with electrodes that are placed in the brain, or on its surface, and listen for telltale electrical storms.


©2019 The New York Times News Service
