Unsupervised image representations have significantly reduced the gap with supervised pretraining, notably with the recent achievements of contrastive learning methods. These …
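For context, the contrastive objectives several of these snippets refer to are typically variants of the InfoNCE loss: two augmented views of the same image are pulled together while views of other images in the batch act as negatives. A minimal one-directional sketch of that generic loss (function name and temperature are illustrative, not taken from any cited paper; SimCLR-style implementations additionally symmetrize it):

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.1):
    """Generic InfoNCE-style contrastive loss.

    z1, z2: (N, D) embeddings of two augmented views of the same N images.
    Row i of z1 and row i of z2 form the positive pair; every other row
    in the batch serves as a negative.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature           # (N, N) cosine similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)       # positive for row i is column i
```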
YAU Rehman, Y Gao… - Proceedings of the …, 2023 - openaccess.thecvf.com
The ubiquity of camera-enabled devices has led to large amounts of unlabeled image data being produced at the edge. The integration of self-supervised learning (SSL) and federated …
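Federated SSL pipelines of the kind this snippet introduces usually build on FedAvg-style aggregation: each edge device trains on its local unlabeled data and the server averages the resulting weights, weighted by local dataset size. A minimal sketch of that aggregation step (illustrative only; it is not the cited paper's specific aggregation scheme):

```python
def fedavg(client_states, client_sizes):
    """Weighted average of client model state_dicts (FedAvg).

    client_states: list of state_dicts returned by participating devices.
    client_sizes:  number of local training samples on each device.
    Non-float buffers are cast to float for the sketch.
    """
    total = sum(client_sizes)
    avg = {}
    for key in client_states[0]:
        avg[key] = sum(
            state[key].float() * (n / total)
            for state, n in zip(client_states, client_sizes)
        )
    return avg
```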
Recently, state space models (SSMs) with efficient hardware-aware designs, i.e., the Mamba deep learning model, have shown great potential for long-sequence modeling …
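The long-sequence appeal of SSMs comes from their linear recurrence: a discretized state space model updates a hidden state as h_t = Ā h_{t-1} + B̄ x_t and reads out y_t = C h_t, so cost grows linearly in sequence length rather than quadratically as in attention. A minimal sketch of that generic scan (this is the plain linear SSM recurrence, not Mamba's selective, hardware-aware kernel):

```python
import torch

def ssm_scan(x, A_bar, B_bar, C):
    """Sequential scan of a discretized linear SSM.

    x: (L, d_in) input sequence; A_bar: (d_state, d_state);
    B_bar: (d_state, d_in); C: (d_out, d_state).
    Runtime is linear in the sequence length L.
    """
    h = torch.zeros(A_bar.size(0), dtype=x.dtype)
    ys = []
    for x_t in x:                    # h_t = A_bar @ h_{t-1} + B_bar @ x_t
        h = A_bar @ h + B_bar @ x_t
        ys.append(C @ h)             # y_t = C @ h_t
    return torch.stack(ys)
```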
J Kahana, Y Hoshen - European Conference on Computer Vision, 2022 - Springer
Learning representations of images that are invariant to sensitive or unwanted attributes is important for many tasks, including bias removal and cross-domain retrieval. Here, our …
Mixup is a well-known data-dependent augmentation technique for DNNs, consisting of two sub-tasks: mixup generation and classification. However, the recent dominant online training …
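Concretely, the "mixup generation" sub-task draws λ ~ Beta(α, α) and forms a convex combination of two training examples, while the classification sub-task is trained against the correspondingly mixed targets. A minimal PyTorch sketch of one mixup training step (names and α value are illustrative):

```python
import torch
import torch.nn.functional as F

def mixup_step(model, x, y, alpha=0.2):
    """One training step with mixup: mix the inputs, then mix the losses.

    x: (N, ...) batch of inputs; y: (N,) integer class labels.
    """
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    x_mixed = lam * x + (1 - lam) * x[perm]   # mixup generation
    logits = model(x_mixed)                   # classification on mixed input
    # Equivalent to training on the mixed label lam*y + (1-lam)*y[perm]
    return (lam * F.cross_entropy(logits, y)
            + (1 - lam) * F.cross_entropy(logits, y[perm]))
```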
Contrastive self-supervised learning has outperformed supervised pretraining on many downstream tasks like segmentation and object detection. However, current methods are …
Non-contrastive methods of self-supervised learning (such as BYOL and SimSiam) learn representations by minimizing the distance between two views of the same image. These …
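The distance these non-contrastive methods minimize is typically a negative cosine similarity between a predicted embedding of one view and a stop-gradient copy of the other; the stop-gradient (or, in BYOL, a momentum encoder) is what prevents the representations from collapsing to a constant. A minimal SimSiam-style sketch (names are illustrative):

```python
import torch.nn.functional as F

def noncontrastive_loss(p1, z1, p2, z2):
    """Symmetrized negative cosine similarity with stop-gradient.

    p1, p2: predictor outputs for views 1 and 2.
    z1, z2: projector outputs for views 1 and 2 (gradients blocked via detach).
    """
    def d(p, z):
        return -F.cosine_similarity(p, z.detach(), dim=-1).mean()
    return 0.5 * d(p1, z2) + 0.5 * d(p2, z1)
```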
Self-supervised representation learning approaches have recently surpassed their supervised learning counterparts on downstream tasks like object detection and image …
We launch EVA, a vision-centric foundation model to explore the limits of visual representation at scale using only publicly accessible data. EVA is a vanilla ViT pre-trained …