A newly developed brain-computer interface is offering renewed hope to individuals who have lost the ability to speak.
Scientists at the University of California, Davis, have unveiled a technology that can instantly convert brain signals into speech.
In a recent demonstration, a participant with amyotrophic lateral sclerosis (ALS) was able to “speak” in real time using a computer. He even managed to vary his tone and sing short melodies.
What distinguishes this innovation from previous assistive technologies is its ability to create natural, real-time conversations. “Compared to past systems, this feels more like a voice call,” said Sergey Stavisky, senior author of the study published in Nature and an assistant professor in UC Davis's Department of Neurological Surgery.
By allowing for immediate translation of brain activity, the system helps users feel more actively involved in conversations.
“They can interject when they want, and others are less likely to cut them off by mistake,” Stavisky explained.
The study participant used an investigational brain-computer interface that had been surgically implanted. This interface includes four microelectrode arrays placed in the part of the brain responsible for speech. These sensors record brain activity and send it to a computer, which interprets the signals and recreates the intended speech.
“Our algorithms match neural signals to the sounds a person is trying to produce at any given moment. This allows for expressive speech synthesis and gives the user control over the rhythm and tone of the voice,” said Maitreyee Wairagkar, lead author and project scientist in the Neuroprosthetics Lab at UC Davis.
The system's response time is extremely fast, with a delay of just one-fortieth of a second (25 milliseconds) between brain signal and sound. This breakthrough brings new possibilities for people who are unable to speak.
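To make the idea of streaming decoding concrete, here is a minimal sketch, not the UC Davis system itself: each short window of neural features is decoded into speech parameters as soon as it arrives, rather than waiting for a full sentence. The 25-millisecond window size comes from the reported delay; the function names and the toy mapping inside `decode_window` are illustrative assumptions.

```python
# Hypothetical sketch of a streaming neural-to-speech decode loop.
# Not the actual UC Davis decoder; names and mappings are invented
# for illustration only.

WINDOW_MS = 25  # reported delay: one-fortieth of a second per update


def decode_window(features):
    """Stand-in for the real model: map one window of neural
    features to synthesizer parameters (pitch and loudness)."""
    mean_activity = sum(features) / len(features)
    pitch_hz = 100 + 10 * mean_activity           # toy pitch mapping
    loudness = max(0.0, min(1.0, features[0]))    # clamp to [0, 1]
    return {"pitch_hz": pitch_hz, "loudness": loudness}


def stream_decode(feature_windows):
    """Decode windows one at a time, as they would arrive every
    WINDOW_MS milliseconds, so speech is produced continuously."""
    return [decode_window(w) for w in feature_windows]
```

The key design point the article describes is that decoding happens per window rather than per utterance, which is what lets the user interject mid-conversation and control rhythm and tone moment to moment.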