Survey: Image mixing and deleting for data augmentation

H Naveed, S Anwar, M Hayat, K Javed… - Engineering Applications of …, 2024 - Elsevier
Neural networks are prone to overfitting and memorizing data patterns. To avoid overfitting
and enhance their generalization and performance, various methods have been suggested …
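The two augmentation families this survey covers, mixing and deleting, can be illustrated with a minimal NumPy sketch. This is not code from the survey; `mixup` follows the standard convex-combination recipe (Zhang et al.) and `random_erase` zeroes a patch as in Cutout/Random Erasing, with all parameter names chosen here for illustration:

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Mixing: blend two images and their one-hot labels with a
    Beta-distributed coefficient (standard mixup recipe)."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

def random_erase(x, size=8, rng=None):
    """Deleting: zero out a random square patch of the image,
    as in Cutout / Random Erasing."""
    rng = rng or np.random.default_rng()
    h, w = x.shape[:2]
    top = rng.integers(0, h - size)   # upper bound exclusive
    left = rng.integers(0, w - size)
    out = x.copy()
    out[top:top + size, left:left + size] = 0.0
    return out
```

Both transforms leave the input shape unchanged, so they can be dropped into any training pipeline before batching.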

Hard negative mixing for contrastive learning

Y Kalantidis, MB Sariyildiz, N Pion… - Advances in neural …, 2020 - proceedings.neurips.cc
Contrastive learning has become a key component of self-supervised learning approaches
for computer vision. By learning to embed two augmented versions of the same image close …
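The paper's core idea, synthesizing harder negatives by mixing existing ones in embedding space, can be sketched as follows. This is a simplified illustration of the MoCHi-style recipe, not the authors' implementation; the pool size and function names are assumptions:

```python
import numpy as np

def mix_hard_negatives(query, negatives, n_synth=4, pool=8, rng=None):
    """Create synthetic hard negatives as convex combinations of the
    negatives most similar to the query (hard-negative mixing sketch).
    All embeddings are assumed L2-normalized."""
    rng = rng or np.random.default_rng()
    # Rank negatives by cosine similarity to the query, hardest first.
    sims = negatives @ query
    hard = negatives[np.argsort(-sims)[:pool]]
    # Mix random pairs drawn from the hardest pool.
    i = rng.integers(0, len(hard), n_synth)
    j = rng.integers(0, len(hard), n_synth)
    lam = rng.uniform(0.0, 1.0, (n_synth, 1))
    synth = lam * hard[i] + (1 - lam) * hard[j]
    # Project back onto the unit sphere.
    return synth / np.linalg.norm(synth, axis=1, keepdims=True)
```

The synthetic negatives are appended to the negative set of the contrastive loss; mixing happens on embeddings, so it adds negligible compute compared with processing extra images.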

Efficiently teaching an effective dense retriever with balanced topic aware sampling

S Hofstätter, SC Lin, JH Yang, J Lin… - Proceedings of the 44th …, 2021 - dl.acm.org
A vital step towards the widespread adoption of neural retrieval models is their resource
efficiency throughout the training, indexing and query workflows. The neural IR community …

Rethinking federated learning with domain shift: A prototype view

W Huang, M Ye, Z Shi, H Li, B Du - 2023 IEEE/CVF Conference …, 2023 - ieeexplore.ieee.org
Federated learning shows bright promise as a privacy-preserving collaborative learning
technique. However, prevalent solutions mainly focus on all private data sampled from the …

Partmix: Regularization strategy to learn part discovery for visible-infrared person re-identification

M Kim, S Kim, J Park, S Park… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Modern data augmentation using mixture-based techniques can keep models from
overfitting to the training data in various computer vision applications, but a proper data …

Crafting better contrastive views for siamese representation learning

X Peng, K Wang, Z Zhu, M Wang… - Proceedings of the …, 2022 - openaccess.thecvf.com
Recent self-supervised contrastive learning methods greatly benefit from the Siamese
structure that aims at minimizing distances between positive pairs. For high performance …

Transmix: Attend to mix for vision transformers

JN Chen, S Sun, J He, PHS Torr… - Proceedings of the …, 2022 - openaccess.thecvf.com
Mixup-based augmentation has been found to be effective for generalizing models during
training, especially for Vision Transformers (ViTs) since they can easily overfit. However …

Mixed autoencoder for self-supervised visual representation learning

K Chen, Z Liu, L Hong, H Xu, Z Li… - Proceedings of the …, 2023 - openaccess.thecvf.com
Masked Autoencoder (MAE) has demonstrated superior performance on various vision tasks
via randomly masking image patches and reconstruction. However, effective data …

Overcoming data limitations: a few-shot specific emitter identification method using self-supervised learning and adversarial augmentation

C Liu, X Fu, Y Wang, L Guo, Y Liu, Y Lin… - IEEE Transactions …, 2023 - ieeexplore.ieee.org
Specific emitter identification (SEI) based on radio frequency fingerprinting (RFF) is a
physical layer authentication method in the field of wireless network security. RFFs are …

Hallucination improves the performance of unsupervised visual representation learning

J Wu, J Hobbs, N Hovakimyan - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Contrastive learning models based on the Siamese structure have demonstrated remarkable
performance in self-supervised learning. This success of contrastive learning relies on …