Synthetic speech generated from brain recordings
Publication/Creation Date: April 25, 2019
Description: Assistive devices can enable people with paralysis to type sentences letter by letter at up to 10 words per minute, but that's a far cry from everyday conversation, which takes place at about 150 words per minute.
New research from UC San Francisco shows it's possible to generate synthesized speech directly from brain signals. Speech is complex: brain signals precisely coordinate nearly 100 muscles to move the lips, jaw, tongue, and larynx, shaping our breath into the sounds that form words and sentences.
In the UCSF study, volunteer participants being treated for epilepsy read sentences aloud while electrodes placed on the surface of their brains measured the resulting signals. Computational models based on that data enabled the researchers to decode how activity patterns in the brain's speech centers contribute to particular movements of the vocal tract. These simulated vocal tract movements were transformed into sounds to generate intelligible synthesized speech.
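The two-stage approach described above (brain activity decoded to vocal tract movements, which are then transformed into sound features) can be sketched in miniature. The toy data, dimensions, and simple linear least-squares models below are illustrative assumptions, not the study's actual recordings or architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: shapes and the linear relationships are assumptions
# chosen only to illustrate the two-stage decoding idea.
n_samples = 500
neural = rng.normal(size=(n_samples, 64))          # electrode features per time step
kinematics = neural @ rng.normal(size=(64, 12))    # simulated vocal tract movements
acoustics = kinematics @ rng.normal(size=(12, 8))  # e.g. spectral sound features

# Stage 1: decode vocal tract movements from brain activity.
W1, *_ = np.linalg.lstsq(neural, kinematics, rcond=None)

# Stage 2: transform the decoded movements into acoustic features.
W2, *_ = np.linalg.lstsq(neural @ W1, acoustics, rcond=None)

decoded_acoustics = (neural @ W1) @ W2
error = np.mean((decoded_acoustics - acoustics) ** 2)
print(f"mean squared error: {error:.6f}")
```

On this noiseless toy data the fit is essentially exact; the point is the structure, where an intermediate articulatory representation sits between neural signals and sound.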
Subjects: Brain Function, Amyotrophic Lateral Sclerosis (ALS)
Date archived: April 25, 2019
Last edited: May 29, 2019