Multimodal machine learning: a survey and taxonomy
T Baltrušaitis, C Ahuja, LP Morency - IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019 - ieeexplore.ieee.org
Our experience of the world is multimodal: we see objects, hear sounds, feel texture, smell odors, and taste flavors. Modality refers to the way in which something happens or is …
How contextual are contextualized word representations? Comparing the geometry of BERT, ELMo, and GPT-2 embeddings
K Ethayarajh - arXiv preprint arXiv:1909.00512, 2019 - arxiv.org
Replacing static word embeddings with contextualized word representations has yielded significant improvements on many NLP tasks. However, just how contextual are the …
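Ethayarajh quantifies contextuality partly through a word's self-similarity: the mean cosine similarity among its contextualized vectors across different contexts. A minimal sketch of that measure, assuming the occurrence vectors have already been extracted from a model such as BERT (the extraction step, layer choice, and the paper's anisotropy adjustment are omitted here):

```python
import numpy as np

def self_similarity(vectors):
    """Mean pairwise cosine similarity among contextualized vectors
    of the same word drawn from n different contexts; lower values
    mean the representations are more context-specific."""
    V = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    sims = V @ V.T                           # (n, n) cosine matrix
    n = len(V)
    return (sims.sum() - n) / (n * (n - 1))  # off-diagonal mean

# Toy usage: 8 occurrence vectors of one word, dimension 768.
occ = np.random.default_rng(0).normal(size=(8, 768))
print(self_similarity(occ))  # near 0 for random vectors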
Deep multimodal fusion by channel exchanging
Y Wang, W Huang, F Sun, T Xu… - Advances in Neural Information Processing Systems, 2020 - proceedings.neurips.cc
Deep multimodal fusion by using multiple sources of data for classification or regression has exhibited a clear advantage over its unimodal counterpart on various applications. Yet …
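For orientation on what "multimodal fusion for classification" means concretely, here is a generic concatenation-based fusion classifier in PyTorch. This is the simple baseline such papers improve on, not the channel-exchanging mechanism this paper proposes, and all layer sizes are illustrative:

```python
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    """Generic fusion baseline: encode each modality separately,
    concatenate the encodings, and classify from the joint vector."""
    def __init__(self, d_image=512, d_audio=128, n_classes=10):
        super().__init__()
        self.image_enc = nn.Sequential(nn.Linear(d_image, 256), nn.ReLU())
        self.audio_enc = nn.Sequential(nn.Linear(d_audio, 256), nn.ReLU())
        self.head = nn.Linear(512, n_classes)

    def forward(self, image_feat, audio_feat):
        z = torch.cat([self.image_enc(image_feat),
                       self.audio_enc(audio_feat)], dim=-1)
        return self.head(z)  # class logits

# Toy usage with random per-modality features.
model = LateFusionClassifier()
logits = model(torch.randn(4, 512), torch.randn(4, 128))
print(logits.shape)  # torch.Size([4, 10])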
Experience grounds language
Y Bisk, A Holtzman, J Thomason, J Andreas… - Proceedings of EMNLP, 2020 - aclanthology.org
Language understanding research is held back by a failure to relate language to the physical world it describes and to the social interactions it facilitates. Despite the incredible …
ConceptNet 5.5: an open multilingual graph of general knowledge
R Speer, J Chin, C Havasi - Proceedings of the AAAI Conference on Artificial Intelligence, 2017 - ojs.aaai.org
Machine learning about language can be improved by supplying it with specific knowledge and sources of external information. We present here a new version of the linked …
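ConceptNet is served through a public HTTP API at api.conceptnet.io. A small sketch of retrieving a few edges for one concept; the response fields used below (edges, rel, start, end labels) reflect my understanding of the JSON the service returns and should be checked against the live API:

```python
import requests

# Fetch a few ConceptNet edges for the English concept "language".
resp = requests.get("http://api.conceptnet.io/c/en/language",
                    params={"limit": 5})
resp.raise_for_status()
for edge in resp.json()["edges"]:
    # Each edge links a start node to an end node via a relation,
    # e.g. language -IsA-> communication.
    print(edge["start"]["label"], f"-{edge['rel']['label']}->",
          edge["end"]["label"])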
Evaluating word embedding models: methods and experimental results
B Wang, A Wang, F Chen, Y Wang, CCJ Kuo - APSIPA Transactions on Signal and Information Processing, 2019 - cambridge.org
We conduct an extensive evaluation of a large number of word embedding models for language processing applications. First, we introduce popular word …
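Among the intrinsic evaluation methods such surveys cover is the word-analogy test. A minimal sketch of the 3CosAdd variant over a hand-made toy vocabulary; real evaluations use pretrained vectors and benchmark analogy sets, which are assumed away here:

```python
import numpy as np

def analogy(emb, a, b, c):
    """3CosAdd analogy test ("a is to b as c is to ?"): return the
    word d maximizing cos(v_d, v_b - v_a + v_c), excluding a, b, c."""
    q = emb[b] - emb[a] + emb[c]
    q /= np.linalg.norm(q)
    scores = {w: v @ q / np.linalg.norm(v)
              for w, v in emb.items() if w not in (a, b, c)}
    return max(scores, key=scores.get)

# Toy 2-d vectors encoding gender (x) and royalty (y).
emb = {"king": np.array([1.0, 1.0]), "queen": np.array([-1.0, 1.0]),
       "man": np.array([1.0, 0.0]), "woman": np.array([-1.0, 0.0])}
print(analogy(emb, "man", "king", "woman"))  # queen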
A survey of cross-lingual word embedding models
S Ruder, I Vulić, A Søgaard - Journal of Artificial Intelligence Research, 2019 - jair.org
Cross-lingual representations of words enable us to reason about word meaning in multilingual contexts and are a key facilitator of cross-lingual transfer when developing …
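One family of methods covered by this survey learns a linear map between two monolingual embedding spaces from a seed translation dictionary; the orthogonal (Procrustes) variant has a closed-form solution. A self-contained sketch with synthetic data standing in for real embeddings:

```python
import numpy as np

def procrustes_map(X, Y):
    """Learn an orthogonal map W minimizing ||XW - Y||_F, where X and Y
    are (n, d) source/target embeddings for n seed dictionary pairs.
    Closed-form solution: W = U V^T from the SVD of X^T Y."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

# Toy usage: target space is an orthogonal rotation of the source.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))
W_true, _ = np.linalg.qr(rng.normal(size=(50, 50)))
Y = X @ W_true
W = procrustes_map(X, Y)
print(np.allclose(X @ W, Y))  # True: the rotation is recovered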
Vector-space models of semantic representation from a cognitive perspective: a discussion of common misconceptions
F Günther, L Rinaldi, M Marelli - Perspectives on Psychological Science, 2019 - journals.sagepub.com
Models that represent meaning as high-dimensional numerical vectors—such as latent semantic analysis (LSA), hyperspace analogue to language (HAL), bound encoding of the …
Using the output embedding to improve language models
O Press, L Wolf - arXiv preprint arXiv:1608.05859, 2016 - arxiv.org
We study the topmost weight matrix of neural network language models. We show that this matrix constitutes a valid word embedding. When training language models, we recommend …
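Press and Wolf's recommendation is weight tying: reuse the input embedding matrix as the output (softmax) projection, so one matrix serves both roles. A minimal PyTorch sketch with an LSTM standing in for the language model; the architecture and layer sizes are illustrative only:

```python
import torch
import torch.nn as nn

class TiedLM(nn.Module):
    def __init__(self, vocab_size=10000, d_model=256):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, d_model)
        self.rnn = nn.LSTM(d_model, d_model, batch_first=True)
        self.decoder = nn.Linear(d_model, vocab_size, bias=False)
        # Weight tying: the output projection shares the input
        # embedding matrix, so gradients from the softmax also
        # update the embeddings, and parameter count drops.
        self.decoder.weight = self.embedding.weight

    def forward(self, tokens):
        h, _ = self.rnn(self.embedding(tokens))
        return self.decoder(h)  # logits over the vocabulary

# Toy usage: a batch of 4 sequences of 12 token ids.
model = TiedLM()
logits = model(torch.randint(0, 10000, (4, 12)))
print(logits.shape)  # torch.Size([4, 12, 10000])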