“After 18 years, AI enables a 47-year-old stroke patient to regain speech.”

Using novel medical technology, a severely paralyzed woman has regained the ability to communicate through a digital avatar, made possible by translating her brain signals into speech and facial expressions. The achievement offers hope for people who have lost the ability to communicate due to conditions such as stroke and ALS. Until now, patients in similar situations relied on slow speech synthesizers, often piecing words together through eye tracking or subtle facial movements, which made conversation cumbersome and unnatural.


As shown in a video shared by UC San Francisco (UCSF) on YouTube, the new technology uses tiny electrodes placed on the surface of the brain. These electrodes capture electrical activity in the regions that control speech and facial expression. In real time, the signals are transformed into spoken words and matching facial gestures on a digital avatar, which can smile, frown, and show surprise.

The recipient of the implant, a 47-year-old named Ann, has lived with severe paralysis for more than 18 years following a brainstem stroke.

The stroke left her unable to speak or type. Previously, she could communicate at no more than 14 words per minute using motion-tracking technology. With the new avatar technology, Ann now hopes to pursue a career as a counselor.

The research team achieved the breakthrough by implanting 253 paper-thin electrodes on the surface of Ann's brain, precisely targeting areas linked to speech production. The electrodes intercept the signals that, were it not for the stroke, would drive movements of the tongue, jaw, larynx, and facial muscles.

After the implantation, Ann worked closely with the team to train an AI algorithm to recognize the unique brain signals corresponding to different speech sounds. The algorithm learned to distinguish 39 distinct sounds, and a language model similar to ChatGPT assembled these into coherent sentences. The sentences then drove an avatar whose voice was modeled on Ann's own, reconstructed from a recording made at her wedding before the injury.

The system is not perfect: it has a 28 percent word-decoding error rate and a brain-to-text rate of 78 words per minute, compared with the 110-150 words per minute of natural conversation. Even so, these advances carry tangible significance for patients.

Professor Edward Chang, who led the work at UCSF, said: “Our goal is to restore a full, embodied way of communicating, which is really the most natural way for us to talk with others. These advancements bring us much closer to making this a real solution for patients.”
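To make the two-stage idea concrete (first classify neural activity into one of 39 speech sounds, then let a language-model-like step assemble words), here is a deliberately tiny sketch. Everything in it is a hypothetical illustration, not the UCSF team's actual model, which used deep neural networks trained on Ann's recordings: the phoneme templates, the nearest-template classifier, and the four-word lexicon are all invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each speech sound has a characteristic neural signature.
PHONEMES = ["HH", "AH", "L", "OW"]  # a tiny subset standing in for the 39 units
N_CHANNELS = 253                    # one feature per implanted electrode

# Toy random vectors standing in for learned phoneme signatures.
templates = {p: rng.normal(size=N_CHANNELS) for p in PHONEMES}

def classify_frame(frame):
    """Stage 1: assign a neural feature frame to the nearest phoneme template."""
    return min(templates, key=lambda p: np.linalg.norm(frame - templates[p]))

def decode_utterance(frames, lexicon):
    """Stage 2: a stand-in 'language model' that maps the decoded phoneme
    sequence onto the best-matching word in a small lexicon."""
    phones = tuple(classify_frame(f) for f in frames)
    # Score each word by how many phonemes match position-by-position.
    def score(word):
        return sum(a == b for a, b in zip(phones, lexicon[word]))
    return max(lexicon, key=score)

lexicon = {"hello": ("HH", "AH", "L", "OW"), "low": ("L", "OW")}

# Simulate noisy neural frames for "hello" and decode them.
frames = [templates[p] + 0.1 * rng.normal(size=N_CHANNELS)
          for p in ("HH", "AH", "L", "OW")]
print(decode_utterance(frames, lexicon))  # prints "hello"
```

The real system replaces the nearest-template classifier with a recurrent neural network and the toy scoring with a proper probabilistic language model, but the division of labor is the same: per-frame sound classification, followed by sequence-level decoding into text.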
