Brain implant to vocalize thoughts


For ALS patients, failing a cure, what matters is overcoming a few critical practical problems. Alas, a gaze-controlled wheelchair, telemanipulation arms, intelligent ventilation, and a device for vocalizing by thought will not be on the market for another decade or two.

The press echoed two recent articles about devices for vocalizing through thought. One acquires its data via a deep cranial implant, the other via a surface cranial implant. Below I analyze the first article, because the data and programs needed to reproduce this experiment are publicly available, which is apparently not the case for the second.

I asked one of the authors whether it was possible to use his program with an EEG headset. We'll see what he answers, but I anticipate that the results of this approach will be very disappointing.

The study focused on understanding how facial movement and speech production are organized in the motor cortex at the level of individual neurons. Neural activity was recorded from microelectrode arrays implanted in the brain of a participant with amyotrophic lateral sclerosis (ALS) who retained limited facial movements and the ability to vocalize, but could not produce intelligible speech.

The results showed strong neural tuning to various facial movements in a region of the brain called area 6v, and this activity clearly distinguished different movements. In contrast, area 44, traditionally associated with speech production, appeared (in this experiment) to contain little information about facial actions or speech.

The ventral premotor cortex, located in the frontal lobe just in front of the primary motor cortex, has been implicated in motor vocabularies for both speech and manual gestures. A recent prospective fMRI study demonstrated adaptation effects in the ventral premotor cortex in response to repeated syllables.

Broca's area is a region in the frontal lobe of the brain's dominant hemisphere, usually the left, with functions linked to speech production.

The researchers developed a decoder based on a recurrent neural network (RNN) to translate neural activity into speech. The participant attempted to speak sentences, and the RNN decoded the intended words in real time, achieving a word error rate of 9.1% for a 50-word vocabulary and 23.8% for a 125,000-word vocabulary. This demonstrated the feasibility of decoding attempted speech from neural signals.
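
For readers unfamiliar with the metric, here is a minimal Python sketch of how a word error rate like those figures is computed: the word-level edit distance (substitutions, insertions, deletions) between the decoded sentence and the sentence the participant attempted to say, divided by the length of the latter. The example sentences are invented for illustration.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table for edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("i am very thirsty", "i am thirsty"))  # 0.25
```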

The neural representation of speech sounds in the brain was analyzed, showing that the activity patterns reflected the articulatory features of the phonemes. This suggests that even after years of paralysis, the detailed articulatory code of phonemes remains preserved in the brain.
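
The paper's actual analyses are more involved, but the following sketch shows the general idea of one way to test such a claim, representational similarity analysis: if the neural code reflects articulation, phonemes with similar articulatory features (e.g. /b/ and /p/, both bilabial plosives) should evoke similar neural patterns. All arrays below are random placeholders, and the dimensions are merely plausible.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_phonemes, n_electrodes, n_features = 39, 128, 10

# Placeholder data: a mean firing-rate pattern per phoneme, and a binary
# articulatory feature vector per phoneme (place, manner, voicing, ...).
neural_patterns = rng.normal(size=(n_phonemes, n_electrodes))
articulatory_features = rng.integers(0, 2, size=(n_phonemes, n_features))

# Pairwise dissimilarities in each space, then rank-correlate them:
# a positive correlation means articulatory structure is mirrored
# in the neural patterns.
neural_dist = pdist(neural_patterns, metric="correlation")
artic_dist = pdist(articulatory_features, metric="hamming")
rho, p = spearmanr(neural_dist, artic_dist)
print(f"neural/articulatory similarity: rho={rho:.2f}, p={p:.3f}")
```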

The study also discussed design considerations for improving the accuracy of speech brain-computer interfaces (BCIs), including vocabulary size, number of electrodes used, and size of the training data set. The researchers noted that while their results were promising, there was still room for further optimization and improvements in the technology.
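
To make those considerations concrete, here is a hedged sketch of the kind of ablation experiment they imply: vary the number of electrodes and the amount of training data, and watch how decoding accuracy degrades. A simple logistic-regression classifier on synthetic data stands in for the real RNN and recordings, so only the shape of the experiment is meaningful, not the numbers.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_trials, n_electrodes, n_words = 2000, 128, 50

# Synthetic "neural" data: each word gets a faint electrode signature
# buried in noise.
signatures = rng.normal(size=(n_words, n_electrodes))
labels = rng.integers(0, n_words, size=n_trials)
X = signatures[labels] + rng.normal(scale=4.0, size=(n_trials, n_electrodes))

for k in (16, 64, 128):          # number of electrodes kept
    for frac in (0.25, 1.0):     # fraction of training data used
        Xtr, Xte, ytr, yte = train_test_split(
            X[:, :k], labels, test_size=0.25, random_state=0)
        n = int(len(Xtr) * frac)
        clf = LogisticRegression(max_iter=1000).fit(Xtr[:n], ytr[:n])
        print(f"{k:3d} electrodes, {frac:.0%} of training data: "
              f"accuracy {clf.score(Xte, yte):.2f}")
```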

Overall, the study presented a proof of concept for a speech BCI that could potentially enable people with severe motor impairments to communicate more effectively by translating their intended speech into text from neural signals.


