Imagining a new interface: Hands-free communication without saying a word

Publication/Creation Date
July 30, 2019
Facebook (creator)
Edward Chang (contributor)
David Moses (contributor)
University of California, San Francisco (contributor)
Emily Mugler (contributor)
Michael Abrash (contributor)
Mark Chevillet (contributor)
Johns Hopkins University (contributor)
University of Washington School of Medicine (contributor)
Media Type
Persuasive Intent
At F8 2017, we announced our brain-computer interface (BCI) program, outlining our goal to build a non-invasive, wearable device that lets people type by simply imagining themselves talking. As one part of that work, we’ve been supporting a team of researchers at the University of California, San Francisco (UCSF) who, through a series of studies, are working to help patients with neurological damage speak again by detecting intended speech from brain activity in real time. UCSF works directly with all research participants, and all data remains onsite at UCSF. Today, the UCSF team is sharing some of its findings in a Nature Communications article, which provides insight into some of the work they’ve done so far — and how far we have left to go to achieve fully non-invasive BCI as a potential input solution for AR glasses.
HCI Platform
Location on Body
Head, Brain, Eye