Sparse overcomplete word vector representations

M Faruqui, Y Tsvetkov, D Yogatama, C Dyer… - arXiv preprint arXiv …, 2015 - arxiv.org
Current distributed representations of words show little resemblance to theories of lexical
semantics. The former are dense and uninterpretable, the latter largely based on familiar,
discrete classes (e.g., supersenses) and relations (e.g., synonymy and hypernymy). We
propose methods that transform word vectors into sparse (and optionally binary) vectors.
The resulting representations are more similar to the interpretable features typically used in
NLP, though they are discovered automatically from raw corpora. Because the vectors are …
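The transformation the abstract describes — recoding dense word vectors as sparse, optionally binary, overcomplete vectors — can be illustrated with a minimal sparse-coding sketch. This is not the authors' implementation: the dictionary below is random rather than learned, and the sizes and the `lam` penalty are arbitrary toy values; only the ℓ1-regularized encoding step (here plain ISTA) is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "dense word vectors": 50 words, 10 dimensions (stand-in for real embeddings).
X = rng.standard_normal((50, 10))

# Overcomplete dictionary: 30 atoms for a 10-dimensional space,
# each atom (row) normalized to unit length. In the paper the dictionary
# is learned; here it is random, purely for illustration.
D = rng.standard_normal((30, 10))
D /= np.linalg.norm(D, axis=1, keepdims=True)

def sparse_codes(X, D, lam=0.3, n_iter=200):
    """ISTA for min_A 0.5 * ||X - A @ D||_F^2 + lam * ||A||_1."""
    L = np.linalg.norm(D @ D.T, 2)        # Lipschitz constant of the gradient
    step = 1.0 / L
    A = np.zeros((X.shape[0], D.shape[0]))
    for _ in range(n_iter):
        grad = (A @ D - X) @ D.T          # gradient of the squared-error term
        A = A - step * grad
        # soft-thresholding enforces sparsity
        A = np.sign(A) * np.maximum(np.abs(A) - step * lam, 0.0)
    return A

A = sparse_codes(X, D)          # sparse overcomplete codes, shape (50, 30)
B = (A > 0).astype(int)         # optional binarization, as in the abstract
print("fraction of zero entries:", np.mean(A == 0))
```

Each word is now represented by a 30-dimensional vector most of whose entries are exactly zero, so individual active dimensions can be inspected like discrete features.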

[PDF][PDF] Sparse overcomplete word vector representations

M Faruqui, Y Tsvetkov, D Yogatama, C Dyer, NA Smith - ACL, 2015 - homes.cs.washington.edu