Towards the generalization of contrastive self-supervised learning

W Huang, M Yi, X Zhao, Z Jiang - arXiv preprint arXiv:2111.00743, 2021 - arxiv.org
Recently, self-supervised learning has attracted great attention because it requires only
unlabeled data for model training. Contrastive learning is one popular method for self …

Complexity matters: Rethinking the latent space for generative modeling

T Hu, F Chen, H Wang, J Li… - Advances in Neural …, 2024 - proceedings.neurips.cc
In generative modeling, numerous successful approaches leverage a low-dimensional
latent space, e.g., Stable Diffusion models the latent space induced by an encoder and …

Explore and exploit the diverse knowledge in model zoo for domain generalization

Y Chen, T Hu, F Zhou, Z Li… - … Conference on Machine …, 2023 - proceedings.mlr.press
The proliferation of pretrained models, as a result of advancements in pretraining
techniques, has led to the emergence of a vast zoo of publicly available models. Effectively …

Rethinking weak supervision in helping contrastive learning

J Cui, W Huang, Y Wang… - … Conference on Machine …, 2023 - proceedings.mlr.press
Contrastive learning has shown outstanding performance in both supervised and
unsupervised learning, and has recently been introduced to solve weakly supervised …

Information flow in self-supervised learning

Z Tan, J Yang, W Huang, Y Yuan, Y Zhang - arXiv preprint arXiv …, 2023 - arxiv.org
In this paper, we provide a comprehensive toolbox for understanding and enhancing self-
supervised learning (SSL) methods through the lens of matrix information theory …

Contrastive learning is spectral clustering on similarity graph

Z Tan, Y Zhang, J Yang, Y Yuan - arXiv preprint arXiv:2303.15103, 2023 - arxiv.org
Contrastive learning is a powerful self-supervised learning method, but we have a limited
theoretical understanding of how and why it works. In this paper, we prove that …

ArCL: enhancing contrastive learning with augmentation-robust representations

X Zhao, T Du, Y Wang, J Yao, W Huang - arXiv preprint arXiv:2303.01092, 2023 - arxiv.org
Self-Supervised Learning (SSL) is a paradigm that leverages unlabeled data for model
training. Empirical studies show that SSL can achieve promising performance in distribution …

Deciphering the projection head: Representation evaluation self-supervised learning

J Ma, T Hu, W Wang - arXiv preprint arXiv:2301.12189, 2023 - arxiv.org
Self-supervised learning (SSL) aims to learn intrinsic features without labels. Despite the
diverse architectures of SSL methods, the projection head always plays an important role in …

Matrix information theory for self-supervised learning

Y Zhang, Z Tan, J Yang, W Huang, Y Yuan - arXiv preprint arXiv …, 2023 - arxiv.org
The maximum entropy encoding framework provides a unified perspective for many non-
contrastive learning methods like SimSiam, Barlow Twins, and MEC. Inspired by this …

Unsupervised visualization of image datasets using contrastive learning

JN Böhm, P Berens, D Kobak - arXiv preprint arXiv:2210.09879, 2022 - arxiv.org
Visualization methods based on the nearest neighbor graph, such as t-SNE or UMAP, are
widely used for visualizing high-dimensional data. Yet, these approaches only produce …