With the remarkable progress of deep neural networks in computer vision, data mixing augmentation techniques are widely studied to alleviate problems of degraded …
The downstream accuracy of self-supervised methods is tightly linked to the proxy task solved during training and the quality of the gradients extracted from it. Richer and more …
Cluster discrimination is an effective pretext task for unsupervised representation learning that typically consists of two phases: clustering and discrimination. Clustering assigns …
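The two-phase loop described above can be sketched minimally: a clustering step produces pseudo-labels (here, nearest-centroid assignment in the style of k-means), which the discrimination step would then treat as classification targets. This is an illustrative sketch of the general recipe, not the specific method of the cited paper; the function name and seeding strategy are assumptions.

```python
import numpy as np

def assign_clusters(feats, centroids):
    """Clustering phase: give each feature vector the id of its
    nearest centroid; these ids serve as pseudo-labels for the
    subsequent discrimination (classification) phase."""
    # pairwise Euclidean distances, shape (n_samples, n_centroids)
    d = np.linalg.norm(feats[:, None, :] - centroids[None, :, :], axis=-1)
    return d.argmin(axis=1)

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 5))       # toy embeddings
centroids = feats[:3].copy()          # seed centroids from the data
pseudo_labels = assign_clusters(feats, centroids)
```

A discrimination phase would then train a classifier head against `pseudo_labels` and periodically re-run the clustering step on refreshed features.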
In computer vision, pre-training models based on large-scale supervised learning have been proven effective over the past few years. However, existing works mostly focus on …
H Fan, P Liu, M Xu, Y Yang - IEEE Transactions on Cybernetics, 2021 - ieeexplore.ieee.org
The superiority of deeply learned representations relies on large-scale labeled datasets. However, annotating data is usually expensive or even infeasible in some scenarios. To …
Data mixing augmentation has proven effective for improving the generalization ability of deep neural networks. While early methods mix samples by hand-crafted policies …
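As a concrete example of the hand-crafted mixing policies mentioned above, the classic mixup recipe draws a coefficient from a Beta distribution and forms the same convex combination of both inputs and labels. This is a generic sketch of that recipe, not the cited paper's method; the function name and defaults are assumptions.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Mix two samples and their one-hot labels with a
    Beta(alpha, alpha)-sampled weight lam."""
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)          # mixing coefficient in (0, 1)
    x = lam * x1 + (1.0 - lam) * x2       # convex combination of inputs
    y = lam * y1 + (1.0 - lam) * y2       # same combination of labels
    return x, y, lam

x1, x2 = np.ones((4, 4)), np.zeros((4, 4))
y1, y2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
x, y, lam = mixup(x1, y1, x2, y2)
```

Learned mixing policies replace the hand-crafted Beta sampling (and, in cutmix-style variants, the pixel-wise combination) with a trained module, but the interpolation of labels stays the same.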
P Chen, S Liu, J Jia - … of the IEEE/CVF conference on …, 2021 - openaccess.thecvf.com
Unsupervised representation learning with contrastive learning has recently achieved great success. However, these methods have to duplicate each training batch to construct …
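The batch duplication mentioned above exists to build positive pairs: each image is augmented twice, and an InfoNCE-style loss pulls the two views together while pushing apart the rest of the batch. A minimal numpy sketch of that loss, assuming L2-normalized embeddings and a temperature `tau` (this illustrates the general contrastive objective, not the cited paper's specific variant):

```python
import numpy as np

def info_nce(z1, z2, tau=0.5):
    """InfoNCE loss between two augmented views of the same batch.
    Row i of z1 and row i of z2 are views of the same image."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau              # pairwise cosine similarities
    # positives sit on the diagonal; everything else is a negative
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
z = rng.normal(size=(4, 8))
loss_aligned = info_nce(z, z)             # identical views: near-minimal loss
```

Because both views require a forward pass, every batch costs twice the compute of a supervised step, which is the overhead the snippet's method presumably targets.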
Self-supervised learning aims to learn representations from the data itself without explicit manual supervision. Existing efforts ignore a crucial aspect of self-supervised learning: the …
S Mo, Y Wang, X Luo, D Li - arXiv preprint arXiv:2402.17406, 2024 - arxiv.org
Visual Prompt Tuning (VPT) techniques have gained prominence for their capacity to adapt pre-trained Vision Transformers (ViTs) to downstream visual tasks using specialized …
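Mechanically, VPT adapts a frozen ViT by prepending a small set of learnable prompt tokens to the patch-embedding sequence before the transformer blocks; only the prompts (and usually a head) are trained. A shape-level sketch of that prepending step, with illustrative names and dimensions (the real method operates on framework tensors with gradients, not numpy arrays):

```python
import numpy as np

def prepend_prompts(patch_tokens, prompts):
    """Prepend shared learnable prompt tokens to each sequence in the
    batch of frozen patch embeddings."""
    batch = patch_tokens.shape[0]
    # broadcast the shared prompts across the batch dimension
    tiled = np.broadcast_to(prompts, (batch,) + prompts.shape)
    return np.concatenate([tiled, patch_tokens], axis=1)

tokens = np.zeros((2, 196, 768))      # e.g. 14x14 patches, embed dim 768
prompts = np.ones((5, 768))           # 5 learnable prompt vectors
out = prepend_prompts(tokens, prompts)
```

The sequence grows from 196 to 201 tokens, so the prompts attend to (and are attended by) every patch in each transformer block while the backbone weights stay frozen.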