Boosting discriminative visual representation learning with scenario-agnostic mixup

S Li, Z Liu, Z Wang, D Wu, Z Liu, SZ Li - arXiv preprint arXiv:2111.15454, 2021 - arxiv.org
Mixup is a well-known data-dependent augmentation technique for DNNs, consisting of two
sub-tasks: mixup generation and classification. However, the recent dominant online training …
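
The two sub-tasks named here are easiest to see in the vanilla formulation of Zhang et al. (2018): sample an interpolation weight, blend two inputs and their labels (generation), then train the classifier on the blend. A minimal sketch under that assumption; the names are illustrative and not taken from this paper.

import numpy as np

def mixup(x1, y1, x2, y2, alpha=1.0):
    # Generation: draw a mixing weight from Beta(alpha, alpha) and
    # interpolate both the inputs and the one-hot labels with it.
    lam = np.random.beta(alpha, alpha)
    return lam * x1 + (1.0 - lam) * x2, lam * y1 + (1.0 - lam) * y2

# Classification then trains on the blended pair (x_mix, y_mix).
x_a, x_b = np.random.rand(3, 32, 32), np.random.rand(3, 32, 32)
y_a, y_b = np.eye(10)[3], np.eye(10)[7]
x_mix, y_mix = mixup(x_a, y_a, x_b, y_b)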

Openmixup: Open mixup toolbox and benchmark for visual representation learning

S Li, Z Wang, Z Liu, D Wu, SZ Li - arXiv preprint arXiv:2209.04851, 2022 - arxiv.org
With the remarkable progress of deep neural networks in computer vision, data mixing
augmentation techniques are widely studied to alleviate problems of degraded …

Global-local self-distillation for visual representation learning

T Lebailly, T Tuytelaars - Proceedings of the IEEE/CVF …, 2023 - openaccess.thecvf.com
The downstream accuracy of self-supervised methods is tightly linked to the proxy task
solved during training and the quality of the gradients extracted from it. Richer and more …
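
The snippet is truncated, but methods in this family typically train a student's local-crop outputs to match a teacher's global-crop distribution. A schematic, DINO-style sketch under that assumption (all names and temperatures are illustrative, not taken from the paper):

import numpy as np

def softmax(z, temp):
    z = z / temp - (z / temp).max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def global_local_loss(teacher_global, student_locals, t_temp=0.04, s_temp=0.1):
    # Sharpened teacher output on the global view is the target; in a real
    # implementation it would be detached from the gradient.
    target = softmax(teacher_global, t_temp)
    losses = [-(target * np.log(softmax(s, s_temp) + 1e-9)).sum(-1).mean()
              for s in student_locals]
    return float(np.mean(losses))

t_out = np.random.randn(8, 64)
loss = global_local_loss(t_out, [np.random.randn(8, 64), np.random.randn(8, 64)])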

Unsupervised visual representation learning by online constrained k-means

Q Qian, Y Xu, J Hu, H Li, R Jin - Proceedings of the IEEE …, 2022 - openaccess.thecvf.com
Cluster discrimination is an effective pretext task for unsupervised representation learning,
which often consists of two phases: clustering and discrimination. Clustering assigns …
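
A toy illustration of the two phases named in the abstract, using plain offline k-means for the clustering step; the paper's online, constrained assignment is not reproduced here, and all names are illustrative.

import numpy as np

def kmeans(x, k, iters=10, seed=0):
    # Phase 1 (clustering): give every embedding a pseudo-label.
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), size=k, replace=False)].copy()
    for _ in range(iters):
        dists = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = x[labels == j].mean(0)
    return labels

# Phase 2 (discrimination) would fit a classifier to these pseudo-labels
# with cross-entropy, then alternate back to clustering.
pseudo_labels = kmeans(np.random.randn(256, 16), k=8)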

X-learner: Learning cross sources and tasks for universal visual representation

Y He, G Huang, S Chen, J Teng, K Wang, Z Yin… - … on Computer Vision, 2022 - Springer
In computer vision, pre-training models based on large-scale supervised learning have
been proven effective over the past few years. However, existing works mostly focus on …

Unsupervised visual representation learning via dual-level progressive similar instance selection

H Fan, P Liu, M Xu, Y Yang - IEEE Transactions on Cybernetics, 2021 - ieeexplore.ieee.org
The superiority of deeply learned representations relies on large-scale labeled datasets.
However, annotating data is usually expensive or even infeasible in some scenarios. To …

Automix: Unveiling the power of mixup for stronger classifiers

Z Liu, S Li, D Wu, Z Liu, Z Chen, L Wu, SZ Li - European Conference on …, 2022 - Springer
Data mixing augmentation has proved effective for improving the generalization
ability of deep neural networks. While early methods mix samples by hand-crafted policies …
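
CutMix is a representative hand-crafted policy of the kind the abstract contrasts learned mixing against. A minimal sketch of that baseline (not AutoMix itself), with the usual Beta-sampled box; the names are illustrative.

import numpy as np

def cutmix(x1, y1, x2, y2, alpha=1.0, seed=None):
    # Hand-crafted policy: paste a random box from x2 into x1 and mix
    # the labels in proportion to the area actually pasted.
    rng = np.random.default_rng(seed)
    lam = rng.beta(alpha, alpha)
    _, h, w = x1.shape
    cut_h, cut_w = int(h * np.sqrt(1 - lam)), int(w * np.sqrt(1 - lam))
    cy, cx = int(rng.integers(h)), int(rng.integers(w))
    top, bot = max(cy - cut_h // 2, 0), min(cy + cut_h // 2, h)
    lft, rgt = max(cx - cut_w // 2, 0), min(cx + cut_w // 2, w)
    out = x1.copy()
    out[:, top:bot, lft:rgt] = x2[:, top:bot, lft:rgt]
    lam_adj = 1 - (bot - top) * (rgt - lft) / (h * w)
    return out, lam_adj * y1 + (1 - lam_adj) * y2

mixed, label = cutmix(np.random.rand(3, 32, 32), np.eye(10)[1],
                      np.random.rand(3, 32, 32), np.eye(10)[4])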

Jigsaw clustering for unsupervised visual representation learning

P Chen, S Liu, J Jia - … of the IEEE/CVF conference on …, 2021 - openaccess.thecvf.com
Unsupervised representation learning with contrastive learning has recently achieved
great success. However, these methods have to duplicate each training batch to construct …
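
The batch duplication the abstract refers to is the standard two-view construction of contrastive baselines such as SimCLR: every image is augmented twice and both copies are forwarded. A schematic sketch of that cost (toy augmentation, not the paper's jigsaw pipeline):

import numpy as np

def two_view_batch(batch, augment):
    # Two independent augmentations per image double the effective batch
    # (and the forward-pass compute) at every training step.
    view1 = np.stack([augment(x) for x in batch])
    view2 = np.stack([augment(x) for x in batch])
    return np.concatenate([view1, view2], axis=0)  # shape (2N, ...)

flip = lambda x: x[:, :, ::-1] if np.random.rand() < 0.5 else x
doubled = two_view_batch(np.random.rand(16, 3, 32, 32), flip)  # (32, 3, 32, 32)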

Scaling and benchmarking self-supervised visual representation learning

P Goyal, D Mahajan, A Gupta… - Proceedings of the IEEE …, 2019 - openaccess.thecvf.com
Self-supervised learning aims to learn representations from the data itself without explicit
manual supervision. Existing efforts ignore a crucial aspect of self-supervised learning: the …

LSPT: Long-term Spatial Prompt Tuning for Visual Representation Learning

S Mo, Y Wang, X Luo, D Li - arXiv preprint arXiv:2402.17406, 2024 - arxiv.org
Visual Prompt Tuning (VPT) techniques have gained prominence for their capacity to adapt
pre-trained Vision Transformers (ViTs) to downstream visual tasks using specialized …
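
Visual Prompt Tuning, which this work builds on, prepends a small set of learnable tokens to the frozen ViT's patch sequence and trains only those. A shape-level sketch under that reading (no real ViT weights; every name here is illustrative):

import numpy as np

def prepend_prompts(patch_tokens, prompts):
    # patch_tokens: (batch, n_patches, dim) from a frozen ViT embedding.
    # prompts: (n_prompts, dim), the only parameters that receive gradients.
    b = patch_tokens.shape[0]
    tiled = np.broadcast_to(prompts, (b,) + prompts.shape)
    return np.concatenate([tiled, patch_tokens], axis=1)

tokens = np.random.randn(4, 196, 768)       # 14x14 patches at ViT-B width
prompts = 0.02 * np.random.randn(8, 768)    # 8 learnable prompt tokens
extended = prepend_prompts(tokens, prompts) # (4, 204, 768)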