Vokenization: Improving Language Understanding with Contextualized, Visual-Grounded Supervision
Publication/Creation Date: October 14, 2020
Humans learn language by listening, speaking, writing, reading, and also, via interaction with the multimodal real world. Existing language pre-training frameworks show the effectiveness of text-only self-supervision while we explore the idea of a visually-supervised language model in this paper. We find that the main reason hindering this exploration is the large divergence in magnitude and distributions between the visually-grounded language datasets and pure-language corpora. Therefore, we develop a technique named “vokenization” that extrapolates multimodal alignments to language-only data by contextually mapping language tokens to their related images (which we call “vokens”). The “vokenizer” is trained on relatively small image captioning datasets and we then apply it to generate vokens for large language corpora. Trained with these contextually generated vokens, our visually-supervised language models show consistent improvements over self-supervised alternatives on multiple pure-language tasks such as GLUE, SQuAD, and SWAG.
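As a rough illustration of the idea summarized in the abstract (not the authors' released code), the sketch below shows how a vokenizer might contextually score each token of a sentence against a pool of candidate images, assign each token its highest-scoring image as a "voken", and use those vokens as an auxiliary classification target during language-model pre-training. All class, function, and parameter names here (`Vokenizer`, `vokenize`, `voken_head`, the feature dimensions, etc.) are hypothetical assumptions for illustration.

```python
# Hypothetical sketch of contextual token-to-image retrieval ("vokenization")
# and voken-supervised language-model training. Shapes and names are
# illustrative assumptions, not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Vokenizer(nn.Module):
    """Scores contextualized token embeddings against image embeddings."""

    def __init__(self, hidden_dim: int, image_dim: int, joint_dim: int = 64):
        super().__init__()
        self.token_proj = nn.Linear(hidden_dim, joint_dim)
        self.image_proj = nn.Linear(image_dim, joint_dim)

    def forward(self, token_hidden, image_feats):
        # token_hidden: (seq_len, hidden_dim) contextual embeddings of one sentence
        # image_feats:  (num_images, image_dim) features of the candidate image pool
        t = F.normalize(self.token_proj(token_hidden), dim=-1)
        v = F.normalize(self.image_proj(image_feats), dim=-1)
        return t @ v.t()  # (seq_len, num_images) token-image relevance scores


def vokenize(scores):
    """Assign each token the index of its highest-scoring image (its 'voken')."""
    return scores.argmax(dim=-1)  # (seq_len,)


# Toy usage: one 6-token sentence and a pool of 1000 candidate images.
hidden_dim, image_dim, num_images, seq_len = 768, 2048, 1000, 6
vokenizer = Vokenizer(hidden_dim, image_dim)
token_hidden = torch.randn(seq_len, hidden_dim)    # e.g. from a text encoder
image_feats = torch.randn(num_images, image_dim)   # e.g. from an image backbone
vokens = vokenize(vokenizer(token_hidden, image_feats))

# During pre-training, the generated vokens can serve as extra labels:
# alongside masked-token prediction, a classification head predicts each
# token's voken id, adding a visually grounded supervision signal.
voken_head = nn.Linear(hidden_dim, num_images)
voken_loss = F.cross_entropy(voken_head(token_hidden), vokens)
```

In this reading, the vokenizer is trained on image-captioning pairs, then run over large text-only corpora so that every token receives a voken label, letting visual supervision scale to datasets far larger than the grounded ones.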
Date archived: November 10, 2020
Last edited: November 10, 2020
How to cite this entry
Hao Tan & Mohit Bansal (University of North Carolina). (October 14, 2020). "Vokenization: Improving Language Understanding with Contextualized, Visual-Grounded Supervision". arXiv. Cornell University. Fabric of Digital Life. https://fabricofdigitallife.com/Detail/objects/4996