Synthetic speech generated from brain recordings


Publication/Creation Date
April 25 2019
Creators/Contributors
Edward Chang (creator)
Gopala Anumanchipalli (creator)
Josh Chartier (creator)
Media Type
Corporate Video
Persuasive Intent
Academic
Description
Assistive devices can enable people with paralysis to type sentences letter by letter at up to 10 words per minute, but that's a far cry from everyday conversation, which takes place at about 150 words per minute.

New research from UC San Francisco shows it's possible to generate synthesized speech directly from brain signals. Speech is complex: brain signals precisely coordinate nearly 100 muscles to move the lips, jaw, tongue, and larynx, shaping our breath into the sounds that form our words and sentences.

In the UCSF study, volunteer participants being treated for epilepsy read sentences aloud while electrodes placed on the surface of their brains measured the resulting signals. Computational models based on that data enabled the researchers to decode how activity patterns in the brain's speech centers contribute to particular movements of the vocal tract. These simulated vocal tract movements were transformed into sounds to generate intelligible synthesized speech.
HCI Platform
Wearables
Discursive Type
Inventions
Location on Body
Brain
Marketing Keywords
Source
https://www.youtube.com/watch?v=3pv0vT82Cys
https://www.ucsf.edu/news/2019/04/414296/synthetic-speech-generated-brain-recordings

Date archived
April 25 2019
Last edited
April 25 2019
How to cite this entry
Edward Chang, Gopala Anumanchipalli, Josh Chartier. (April 25 2019). "Synthetic speech generated from brain recordings". University of California, San Francisco. Fabric of Digital Life. https://fabricofdigitallife.com/index.php/Detail/objects/3813