Early stopping against label noise without validation data

S Yuan, L Feng, T Liu - The Twelfth International Conference on …, 2024 - openreview.net
Early stopping methods in deep learning face the challenge of balancing the volume of
training and validation data, especially in the presence of label noise. Concretely, sparing …

Machine vision therapy: Multimodal large language models can enhance visual robustness via denoising in-context learning

Z Huang, C Liu, Y Dong, H Su, S Zheng… - Forty-first International …, 2023 - openreview.net
Although pre-trained models such as Contrastive Language-Image Pre-Training (CLIP)
show impressive generalization results, their robustness is still limited under Out-of …

Winning prize comes from losing tickets: Improve invariant learning by exploring variant parameters for out-of-distribution generalization

Z Huang, M Li, L Shen, J Yu, C Gong, B Han… - International Journal of …, 2024 - Springer
Out-of-Distribution (OOD) Generalization aims to learn robust models that
generalize well to various environments without fitting to distribution-specific features …

Adversarial Training: A Survey

M Zhao, L Zhang, J Ye, H Lu, B Yin, X Wang - arXiv preprint arXiv …, 2024 - arxiv.org
Adversarial training (AT) refers to integrating adversarial examples (inputs altered with
imperceptible perturbations that can significantly impact model predictions) into the training …
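The definition above — training on inputs perturbed to maximally hurt the model — can be illustrated with a minimal sketch. The fragment below uses an FGSM-style single-step perturbation on a toy logistic-regression classifier; FGSM is one common instance of the inner maximization, chosen here for illustration and not taken from the surveyed paper, and all names, data, and hyperparameters are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """One FGSM step: move x in the direction that increases the logistic
    loss, bounded by eps in the L-infinity norm (illustrative sketch)."""
    p = sigmoid(x @ w + b)           # predicted probability
    grad_x = (p - y) * w             # d(loss)/dx for the logistic loss
    return x + eps * np.sign(grad_x)

def adversarial_train(X, y, eps=0.1, lr=0.5, epochs=200, seed=0):
    """Train on FGSM-perturbed examples: the 'integrate adversarial
    examples into the training' idea from the definition above."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.01, size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            x_adv = fgsm_perturb(xi, yi, w, b, eps)  # inner maximization
            p = sigmoid(x_adv @ w + b)
            w -= lr * (p - yi) * x_adv               # outer minimization
            b -= lr * (p - yi)
    return w, b

# Toy linearly separable data (class decided by the first coordinate).
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
w, b = adversarial_train(X, y)
preds = (sigmoid(X @ w + b) > 0.5).astype(float)
```

Because the perturbation budget (eps = 0.1) is smaller than the class margin, the model still fits the clean points while having seen only their worst-case neighbors during training.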

Improving Fast Adversarial Training Paradigm: An Example Taxonomy Perspective

J Gui, C Jiang, M Dong, K Tong, X Shi, YY Tang… - arXiv preprint arXiv …, 2024 - arxiv.org
While adversarial training is an effective defense method against adversarial attacks, it
notably increases the training cost. To this end, fast adversarial training (FAT) is presented …

Improving Fast Adversarial Training via Self-Knowledge Guidance

C Jiang, J Wang, M Dong, J Gui, X Shi, Y Cao… - arXiv preprint arXiv …, 2024 - arxiv.org
Adversarial training has achieved remarkable advancements in defending against
adversarial attacks. Among them, fast adversarial training (FAT) is gaining attention for its …

Layer-Aware Analysis of Catastrophic Overfitting: Revealing the Pseudo-Robust Shortcut Dependency

R Lin, C Yu, B Han, H Su, T Liu - arXiv preprint arXiv:2405.16262, 2024 - arxiv.org
Catastrophic overfitting (CO) presents a significant challenge in single-step adversarial
training (AT), manifesting as highly distorted deep neural networks (DNNs) that are …

Exploring Robust Overfitting in Adversarial Training: The Formation, Progression, and Mechanism

C Yu - 2024 - ses.library.usyd.edu.au
Deep neural networks (DNNs) have achieved remarkable success across various fields but
remain highly vulnerable to adversarial attacks, prompting the development of numerous …

A Survey on Image Perturbations for Model Robustness: Attacks and Defenses

PF Zhang, Z Huang - researchgate.net
The widespread adoption of deep neural networks (DNNs) has raised significant concerns
about their robustness, particularly in real-world environments characterized by inherent …