Creating Gaze Annotations in Head Mounted Displays

Publication/Creation Date
September 11 2015
Creators/Contributors
Diako Mardenbegi (creator)
Pernilla Qvarfordt (creator)
IT University of Copenhagen (contributor)
FX Palo Alto Laboratory (contributor)
Persuasive Intent
Academic
Description
To facilitate distributed communication in mobile settings, GazeNote was developed for creating and sharing gaze annotations in head-mounted displays (HMDs). With gaze annotations it is possible to point out objects of interest within an image and add a verbal description. To create an annotation, the user simply captures an image using the HMD's camera, looks at an object of interest in the image, and speaks the information to be associated with that object. The gaze location is recorded and visualized with a marker, and the spoken description is transcribed using speech recognition. Gaze annotations can then be shared. Our study showed that users found gaze annotations add precision and expressiveness compared to annotating the image as a whole.
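
As a rough illustration of the workflow described above (capture an image, record the gaze point, transcribe the spoken note, and bundle everything into a shareable annotation), the following is a minimal conceptual sketch in Python. The names GazeAnnotation, capture_image, current_gaze_point, and transcribe_speech are hypothetical placeholders; the entry does not describe the actual GazeNote implementation or its APIs.

from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical data model: one gaze annotation ties an image, a gaze
# location within that image, and a transcribed spoken note together.
@dataclass
class GazeAnnotation:
    image_path: str      # photo captured with the HMD camera
    gaze_x: float        # normalized gaze coordinates within the image (0..1)
    gaze_y: float
    note: str            # speech-to-text transcription of the spoken description
    created_at: datetime

def create_gaze_annotation(capture_image, current_gaze_point, transcribe_speech):
    """Bundle the three steps into one shareable record.

    The three callables stand in for the HMD camera, eye tracker, and
    speech recognizer; they are placeholders, not a published GazeNote API.
    """
    image_path = capture_image()            # 1. capture an image with the HMD camera
    gaze_x, gaze_y = current_gaze_point()   # 2. record where the user is looking in the image
    note = transcribe_speech()              # 3. transcribe the spoken description
    return GazeAnnotation(image_path, gaze_x, gaze_y, note,
                          created_at=datetime.now(timezone.utc))

A shared annotation of this form could then be rendered by drawing a marker at (gaze_x, gaze_y) on the image alongside the transcribed note.
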
HCI Platform
Wearables
Discursive Type
Inventions
Location on Body
Head, Eye

Date archived
October 14 2015
Last edited
October 31 2018
How to cite this entry
Diako Mardenbegi, Pernilla Qvarfordt. (September 11 2015). "Creating Gaze Annotations in Head Mounted Displays". 2015 ACM International Symposium on Wearable Computers (ISWC 2015). International Symposium on Wearable Computers. Fabric of Digital Life. https://fabricofdigitallife.com/index.php/Detail/objects/1276