
Next-gen neural tech helps give voice to voiceless patients

New neural devices decode brain signals to help paralyzed patients communicate more quickly and effectively

[Source photo: Chetan Jha/Press Insider]

Scientists have developed artificial intelligence-enhanced technologies that can help people who have lost the ability to speak because of neurological conditions communicate more quickly and effectively, according to new research.

Two new studies, published in Nature this month, used tiny, electrode-laden devices placed in or on the parts of the brain involved in speech to collect neural signals. These signals were then processed by deep-learning algorithms trained to make sense of the patterns and translate them into coherent speech.

The innovative brain-computer interfaces (BCIs) not only decoded neural activity into speech at heightened speeds but also showed improved accuracy and a broader vocabulary compared to existing technologies.

Neurological disorders often result in the loss of speech due to muscular paralysis. Previous studies demonstrated the potential to decode brain-based speech signals, albeit limited to textual outputs and characterized by reduced speed, precision, and vocabulary.

Dr Sudhir Kumar, a senior consultant neurologist at Apollo Hospitals in Hyderabad, told Press Insider that stroke is the most common cause of paralysis and the third leading cause of disability, after heart attack and cancer. One in six people will suffer a stroke in their lifetime, he said.

“Many people with brain stroke are left with long-term disabilities. The most common disabilities include weakness of limbs, speech impairment and memory impairment,” said Dr Kumar.

Dr Kumar said technology like this can be a major boon to people suffering from aphasia — a language disorder caused by damage in a specific area of the brain that controls language expression and comprehension.

“It would allow people to communicate better with each other. It would also help in early occupational rehabilitation, and they would be able to join their jobs sooner,” said Dr Kumar.

One compelling result in the first study involved a patient with amyotrophic lateral sclerosis (ALS) who, with the aid of the new device, achieved an average communication rate of 62 words per minute, a pace that begins to approach that of natural conversation, about 160 words per minute.

The BCI achieved a mere 9.1% word error rate for a 50-word vocabulary — a 2.7-fold reduction in errors compared to the prior state-of-the-art speech BCI from 2021.

In neuromuscular disorders such as ALS and myasthenia gravis, the brain remains intact and can formulate words and sentences, but muscle weakness can leave speech slurred and incoherent.

Dr Kumar said that for people with such paralytic disorders, this technology could significantly improve the ability to communicate.

In the second study, the team used a technique called electrocorticography, in which scientists placed a thin, rectangular array of 253 electrodes on the surface of the brain’s cortex to capture the activity of many cells at once. This BCI decodes brain signals into three outputs: text, audible speech, and a speaking avatar.

By training a deep-learning model on neural data collected from a patient with severe paralysis caused by a brainstem stroke, the researchers achieved significant outcomes.

Brain-to-text translation occurred at a median rate of 78 words per minute, a 4.3-fold improvement over the previous record. The system also decoded sentences in real time from a vocabulary of more than 1,000 words.

The BCI also translated brain signals into intelligible synthesized speech, which untrained listeners transcribed with a 28% error rate over a set of 529 phrases.

Progress in BCIs has accelerated in recent years, Dr Kumar said. “BCIs can help people with Parkinson’s disease, where the affected person’s voice becomes hypophonic (low volume and not clear), and diseases of the cerebellum, where patients can have dysarthria (slurred speech).”

The innovative BCI also decoded neural signals into the facial movements and non-verbal expressions of an avatar during speech. The stable, high-performance decoding demonstrated over extended periods underscores the durability of the approach.

“Communication is not only via words and sentences. Facial expressions using the eyes and face also convey a lot of meaning. So, non-verbal expressions in the form of an avatar would greatly enhance the quality of communication,” Dr Kumar said.

ABOUT THE AUTHOR

Shireen Khan is a Senior Correspondent at Press Insider. She covers lifestyle, culture, and health.
