Vokenization: Improving Language Understanding with Contextualized, Visual-Grounded Supervision


Publication Title
ArXiv
Publication/Creation Date
October 14 2020
Creators/Contributors
Hao Tan (creator)
Mohit Bansal (creator)
University of North Carolina (creator)
Media Type
Journal Article
Persuasive Intent
Academic
Discursive Type
Inventions
Description
Abstract:

Humans learn language by listening, speaking, writing, reading, and also, via interaction with the multimodal real world. Existing language pre-training frameworks show the effectiveness of text-only self-supervision while we explore the idea of a visually-supervised language model in this paper. We find that the main reason hindering this exploration is the large divergence in magnitude and distributions between the visually-grounded language datasets and pure-language corpora. Therefore, we develop a technique named “vokenization” that extrapolates multimodal alignments to language-only data by contextually mapping language tokens to their related images (which we call “vokens”). The “vokenizer” is trained on relatively small image captioning datasets and we then apply it to generate vokens for large language corpora. Trained with these contextually generated vokens, our visually-supervised language models show consistent improvements over self-supervised alternatives on multiple pure-language tasks such as GLUE, SQuAD, and SWAG.
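
As a rough illustration of the retrieval step the abstract describes (assigning each language token a related image, or “voken”), the sketch below is not the authors' implementation; it assumes hypothetical contextualized token embeddings and image embeddings that already live in a shared space, and the function name vokenize is an illustrative stand-in.

```python
# Minimal sketch (not the paper's code): contextual token-to-image retrieval.
# Assumes token_embeddings come from a contextual text encoder and
# image_embeddings from an image encoder, both mapped to a shared space.
import numpy as np

def vokenize(token_embeddings: np.ndarray, image_embeddings: np.ndarray) -> np.ndarray:
    """Assign each token the index of its most relevant image (its "voken").

    token_embeddings: (num_tokens, dim) contextual token vectors.
    image_embeddings: (num_images, dim) candidate image vectors.
    Returns an array of shape (num_tokens,) holding one image index per token.
    """
    # L2-normalize so the dot product acts as cosine similarity.
    t = token_embeddings / np.linalg.norm(token_embeddings, axis=1, keepdims=True)
    v = image_embeddings / np.linalg.norm(image_embeddings, axis=1, keepdims=True)
    scores = t @ v.T                    # (num_tokens, num_images) relevance scores
    return scores.argmax(axis=1)        # nearest image per token = its voken

# Toy usage with random vectors standing in for real encoders.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(6, 64))       # e.g. 6 tokens of one sentence
images = rng.normal(size=(100, 64))     # e.g. 100 candidate images
print(vokenize(tokens, images))         # one image id per token
```

In the paper's setup, such per-token image assignments would then serve as an extra supervisory signal when pre-training the language model on large text-only corpora.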
HCI Platform
Other
Source
https://arxiv.org/abs/2010.06775

Date archived
November 10 2020
Last edited
July 5 2021
How to cite this entry
Hao Tan, Mohit Bansal, University of North Carolina. (October 14 2020). "Vokenization: Improving Language Understanding with Contextualized, Visual-Grounded Supervision". ArXiv. Cornell University. Fabric of Digital Life. https://fabricofdigitallife.com/Detail/objects/4996