Representation learning via consistent assignment of views to clusters

T Silva, AR Rivera - Proceedings of the 37th ACM/SIGAPP Symposium …, 2022 - dl.acm.org
We introduce Consistent Assignment for Representation Learning (CARL), an unsupervised method that learns visual representations by combining ideas from self-supervised contrastive learning and deep clustering. By viewing contrastive learning from a clustering perspective, CARL learns unsupervised representations through a set of general prototypes that serve as energy anchors, enforcing that different views of a given image are assigned to the same prototype. Unlike contemporary work that combines contrastive learning with deep clustering, CARL learns the set of general prototypes in an online fashion via gradient descent, without relying on non-differentiable algorithms or K-Means to solve the cluster assignment problem. CARL surpasses its competitors on many representation learning benchmarks, including linear evaluation, semi-supervised learning, and transfer learning.
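The core idea of the abstract (soft prototype assignments for two views of the same image, made to agree via a fully differentiable loss) can be illustrated with a minimal NumPy sketch. This is an assumed, simplified formulation for illustration only, not the paper's exact objective; the temperature value, symmetrized cross-entropy form, and all function names here are hypothetical:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def consistency_loss(z1, z2, prototypes, temperature=0.1):
    """Symmetrized cross-entropy between the two views' soft prototype
    assignments (an illustrative stand-in for CARL's objective).

    z1, z2:     (batch, dim) L2-normalized embeddings of two views
    prototypes: (K, dim) learnable prototype vectors
    """
    p1 = softmax(z1 @ prototypes.T / temperature)  # (batch, K) assignments
    p2 = softmax(z2 @ prototypes.T / temperature)
    eps = 1e-9  # avoid log(0)
    # Each view's assignment should predict the other's (symmetrized).
    ce12 = -(p1 * np.log(p2 + eps)).sum(axis=1).mean()
    ce21 = -(p2 * np.log(p1 + eps)).sum(axis=1).mean()
    return 0.5 * (ce12 + ce21)

rng = np.random.default_rng(0)
z = rng.normal(size=(4, 16))
z /= np.linalg.norm(z, axis=1, keepdims=True)
protos = rng.normal(size=(8, 16))
protos /= np.linalg.norm(protos, axis=1, keepdims=True)

loss = consistency_loss(z, z, protos)  # identical views: loss is just entropy
```

Because every step (dot products, softmax, cross-entropy) is differentiable, prototypes can be updated jointly with the encoder by ordinary gradient descent, which is the property the abstract contrasts with non-differentiable assignment solvers such as K-Means.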