Machine vision therapy: Multimodal large language models can enhance visual robustness via denoising in-context learning

Z Huang, C Liu, Y Dong, H Su, S Zheng… - Forty-first International …, 2023 - openreview.net
Although pre-trained models such as Contrastive Language-Image Pre-Training (CLIP)
show impressive generalization results, their robustness is still limited under Out-of …

Winning prize comes from losing tickets: Improve invariant learning by exploring variant parameters for out-of-distribution generalization

Z Huang, M Li, L Shen, J Yu, C Gong, B Han… - International Journal of …, 2024 - Springer
Out-of-Distribution (OOD) Generalization aims to learn robust models that
generalize well to various environments without fitting to distribution-specific features …

On the Over-Memorization During Natural, Robust and Catastrophic Overfitting

R Lin, C Yu, B Han, T Liu - arXiv preprint arXiv:2310.08847, 2023 - arxiv.org
Overfitting negatively impacts the generalization ability of deep neural networks (DNNs) in
both natural and adversarial training. Existing methods struggle to consistently address …

Fabricating customizable 3-D printed pressure sensors by tuning infill characteristics

J Yu, PB Perera, RV Perera, MM Valashani… - IEEE Sensors …, 2024 - ieeexplore.ieee.org
We present a novel method for fabricating customizable pressure sensors by tuning the infill
characteristics of flexible 3-D prints, addressing the demand for precise sensing solutions …

Adversarial Training: A Survey

M Zhao, L Zhang, J Ye, H Lu, B Yin, X Wang - arXiv preprint arXiv …, 2024 - arxiv.org
Adversarial training (AT) refers to integrating adversarial examples--inputs altered with
imperceptible perturbations that can significantly impact model predictions--into the training …
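As an illustration of the technique this survey covers, below is a minimal sketch of a standard adversarial training loop with a single-step FGSM inner attack, one common instantiation. The function names, hyperparameters, and the assumption that inputs are scaled to [0, 1] are illustrative and not drawn from the cited paper.

    # Minimal adversarial training sketch (FGSM inner attack).
    # All names and settings here are illustrative assumptions.
    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, x, y, epsilon):
        # Craft a small perturbation with one signed-gradient step.
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + epsilon * grad.sign()
        # Assumes inputs lie in [0, 1]; clip back into the valid range.
        return x_adv.clamp(0.0, 1.0).detach()

    def adversarial_train(model, loader, epochs=10, epsilon=8 / 255, lr=0.1):
        optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
        model.train()
        for _ in range(epochs):
            for x, y in loader:
                # Train on adversarial counterparts of the clean inputs.
                x_adv = fgsm_perturb(model, x, y, epsilon)
                optimizer.zero_grad()
                loss = F.cross_entropy(model(x_adv), y)
                loss.backward()
                optimizer.step()
        return model

Multi-step (PGD) variants replace the single FGSM step with several projected gradient steps; the surveyed methods differ mainly in how this inner attack and the outer objective are constructed.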

Layer-Aware Analysis of Catastrophic Overfitting: Revealing the Pseudo-Robust Shortcut Dependency

R Lin, C Yu, B Han, H Su, T Liu - arXiv preprint arXiv:2405.16262, 2024 - arxiv.org
Catastrophic overfitting (CO) presents a significant challenge in single-step adversarial
training (AT), manifesting as highly distorted deep neural networks (DNNs) that are …

Parameter-constrained adversarial training

Z Deng, Y Wei - 2023 2nd International Conference on Cloud …, 2023 - ieeexplore.ieee.org
Adversarial training is a simple and effective approach to defend against adversarial attacks.
However, most adversarial training methods incur high time and computational costs …

Exploring Robust Overfitting in Adversarial Training: The Formation, Progression, and Mechanism

C Yu - 2024 - ses.library.usyd.edu.au
Deep neural networks (DNNs) have achieved remarkable success across various fields but
remain highly vulnerable to adversarial attacks, prompting the development of numerous …

RED: Efficiently Boosting Ensemble Robustness via Random Sampling Inference

H Gong, M Dong, S Ma, C Xu - openreview.net
Despite the remarkable achievements of Deep Neural Networks (DNNs) in handling diverse
tasks, these high-performing models remain susceptible to adversarial attacks …

A Survey on Image Perturbations for Model Robustness: Attacks and Defenses

PF Zhang, Z Huang - researchgate.net
The widespread adoption of deep neural networks (DNNs) has raised significant concerns
about their robustness, particularly in real-world environments characterized by inherent …