Seed the views: Hierarchical semantic alignment for contrastive representation learning

H Xu, X Zhang, H Li, L Xie, W Dai… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
Self-supervised learning based on instance discrimination has shown remarkable progress.
In particular, contrastive learning, which regards each image as well as its augmentations as …

Unsupervised learning of visual features by contrasting cluster assignments

M Caron, I Misra, J Mairal, P Goyal… - Advances in neural …, 2020 - proceedings.neurips.cc
Unsupervised image representations have significantly reduced the gap with supervised
pretraining, notably with the recent achievements of contrastive learning methods. These …

L-DAWA: Layer-wise divergence aware weight aggregation in federated self-supervised visual representation learning

YAU Rehman, Y Gao… - Proceedings of the …, 2023 - openaccess.thecvf.com
The ubiquity of camera-enabled devices has led to large amounts of unlabeled image data
being produced at the edge. The integration of self-supervised learning (SSL) and federated …

Vision Mamba: Efficient visual representation learning with bidirectional state space model

L Zhu, B Liao, Q Zhang, X Wang, W Liu… - arXiv preprint arXiv …, 2024 - arxiv.org
Recently, state space models (SSMs) with efficient hardware-aware designs, i.e., the
Mamba deep learning model, have shown great potential for long sequence modeling …

A contrastive objective for learning disentangled representations

J Kahana, Y Hoshen - European Conference on Computer Vision, 2022 - Springer
Learning representations of images that are invariant to sensitive or unwanted attributes is
important for many tasks including bias removal and cross domain retrieval. Here, our …

Boosting discriminative visual representation learning with scenario-agnostic mixup

S Li, Z Liu, Z Wang, D Wu, Z Liu, SZ Li - arXiv preprint arXiv:2111.15454, 2021 - arxiv.org
Mixup is a well-known data-dependent augmentation technique for DNNs, consisting of two
sub-tasks: mixup generation and classification. However, the recent dominant online training …

Revisiting contrastive methods for unsupervised learning of visual representations

W Van Gansbeke, S Vandenhende… - Advances in …, 2021 - proceedings.neurips.cc
Contrastive self-supervised learning has outperformed supervised pretraining on many
downstream tasks like segmentation and object detection. However, current methods are …

Towards demystifying representation learning with non-contrastive self-supervision

X Wang, X Chen, SS Du, Y Tian - arXiv preprint arXiv:2110.04947, 2021 - arxiv.org
Non-contrastive methods of self-supervised learning (such as BYOL and SimSiam) learn
representations by minimizing the distance between two views of the same image. These …

Demystifying contrastive self-supervised learning: Invariances, augmentations and dataset biases

S Purushwalkam, A Gupta - Advances in Neural …, 2020 - proceedings.neurips.cc
Self-supervised representation learning approaches have recently surpassed their
supervised learning counterparts on downstream tasks like object detection and image …

EVA: Exploring the limits of masked visual representation learning at scale

Y Fang, W Wang, B Xie, Q Sun, L Wu… - Proceedings of the …, 2023 - openaccess.thecvf.com
We launch EVA, a vision-centric foundation model to explore the limits of visual
representation at scale using only publicly accessible data. EVA is a vanilla ViT pre-trained …