In 2021, researchers decoded the brainwaves of a paralyzed man who could not speak, turning the sentences he attempted to say into text displayed on a computer screen.
Today, people unable to speak or write due to paralysis have very limited means of communication. There are devices that detect a patient's eye movements, or that let them select words or letters on a screen by moving their head. However, these methods are very slow and an insufficient replacement for speech.
In recent years, experiments with mind-controlled prosthetics have allowed paralyzed patients to move artificial limbs: the patient imagines the intended movement, and the resulting brain signals are relayed through a computer to the limb.
However, a team led by Edward Chang, a neurosurgeon at the University of California, San Francisco, has made a development that has the potential to revolutionize this area. They built a "speech neuroprosthetic": a device that decodes the brainwaves that normally control the vocal tract, that is, the small muscle movements of the lips, jaw, tongue and larynx that play a role in forming each consonant and vowel.
The team had started years earlier by temporarily placing electrodes in the brains of volunteers undergoing surgery for epilepsy, in order to match brain activity to spoken words.
Then, for this development, the researchers implanted electrodes on the surface of the speech-controlling area of the brain of a volunteer who had widespread paralysis and could not speak.
They started by having him attempt specific sentences rather than answer open-ended questions, until the machine analyzing the patterns translated them mostly accurately. The computer eventually learned to distinguish, from the volunteer's brainwaves, 50 words that could be combined into more than 1,000 sentences. Prompted with questions like "How are you today?" and "Are you thirsty?", the device enabled the man to answer "I am very good" or "No, I am not thirsty", translating his attempts into text.
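The training idea described above can be illustrated with a deliberately simplified sketch: record many noisy "neural" feature vectors per word, average them into a per-word template, then decode new recordings by nearest template. Everything here (the vocabulary, the 16-dimensional features, the noise level) is an invented toy setup, not the team's actual signals or model, which used far richer recordings and machine learning.

```python
import numpy as np

# Toy sketch: decode simulated "neural feature" vectors into words with a
# nearest-centroid classifier. All data and parameters are illustrative.

rng = np.random.default_rng(0)
VOCAB = ["i", "am", "very", "good", "no", "not", "thirsty"]
N_FEATURES = 16   # pretend each brain-signal window yields 16 features
NOISE = 0.3       # recording noise level (hypothetical)

# Each word gets a fixed underlying "neural pattern" (hypothetical).
true_patterns = {w: rng.normal(size=N_FEATURES) for w in VOCAB}

def record(word):
    """Simulate one noisy neural recording of an attempted word."""
    return true_patterns[word] + rng.normal(scale=NOISE, size=N_FEATURES)

# "Training": average many noisy recordings per word into a centroid.
centroids = {w: np.mean([record(w) for _ in range(40)], axis=0) for w in VOCAB}

def decode(signal):
    """Classify a recording as the word with the nearest centroid."""
    return min(centroids, key=lambda w: np.linalg.norm(signal - centroids[w]))

# Decode an attempted sentence, one word at a time.
attempted = ["i", "am", "very", "good"]
decoded = [decode(record(w)) for w in attempted]
print(" ".join(decoded))
```

With a small, fixed vocabulary and clean simulated signals, this toy decoder recovers the attempted words reliably; the real difficulty lies in extracting stable features from actual brain activity.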
It took about three to four seconds for a word to appear on the screen after the volunteer tried to say it. That might not be as fast as speaking, but it is faster than tapping out a response.
Harvard neurologists Leigh Hochberg and Sydney Cash described the work as a "pioneering demonstration", suggesting that, with further development, the technology could help people with injuries, strokes, or other illnesses whose "brains prepare messages for delivery but those messages are trapped".
Future steps include improving the device's speed, accuracy, and vocabulary size, and possibly allowing users to communicate with a computer-generated voice rather than text on a screen.
Edited by: İdil Ada Aydos
For more information on Chang's preliminary research: https://www.theguardian.com/science/2019/jul/30/neuroscientists-decode-brain-speech-signals-into-actual-sentences