Re-thinking model inversion attacks against deep neural networks

NB Nguyen, K Chandrasegaran… - Proceedings of the …, 2023 - openaccess.thecvf.com
Model inversion (MI) attacks aim to infer and reconstruct private training data by
abusing access to a model. MI attacks have raised concerns about the leaking of sensitive …
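The attack idea the snippet describes can be sketched in a toy white-box setting (an assumed linear classifier, not the paper's actual method): run gradient ascent on the *input* to maximize the target class's predicted probability, recovering a representative input for that class.

```python
import numpy as np

# Hypothetical "trained" weights for a 2-class linear classifier on 4 features.
W = np.array([[1.0, -1.0, 0.0, 0.0],
              [-1.0, 1.0, 0.5, -0.5]])
b = np.zeros(2)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def target_prob(x, cls):
    return softmax(W @ x + b)[cls]

x = np.zeros(4)   # start the inversion from a blank input
cls = 1           # class whose training data we try to reconstruct
for _ in range(200):
    p = softmax(W @ x + b)
    grad = W.T @ (np.eye(2)[cls] - p)  # gradient of log p(cls | x) w.r.t. x
    x += 0.1 * grad                    # ascend the target-class log-likelihood
```

After the loop, `x` scores highly for the target class, illustrating why white-box access alone can leak class-representative inputs.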

Balance, imbalance, and rebalance: Understanding robust overfitting from a minimax game perspective

Y Wang, L Li, J Yang, Z Lin… - Advances in neural …, 2024 - proceedings.neurips.cc
Adversarial Training (AT) has arguably become the state-of-the-art algorithm for extracting
robust features. However, researchers have recently noticed that AT suffers from severe robust …

Data augmentation alone can improve adversarial training

L Li, M Spratling - arXiv preprint arXiv:2301.09879, 2023 - arxiv.org
Adversarial training suffers from the issue of robust overfitting, which seriously impairs its
generalization performance. Data augmentation, which is effective at preventing overfitting …

A survey on generative modeling with limited data, few shots, and zero shot

M Abdollahzadeh, T Malekzadeh, CTH Teo… - arXiv preprint arXiv …, 2023 - arxiv.org
In machine learning, generative modeling aims to learn to generate new data statistically
similar to the training data distribution. In this paper, we survey learning generative models …

Initialization matters for adversarial transfer learning

A Hua, J Gu, Z Xue, N Carlini… - Proceedings of the …, 2024 - openaccess.thecvf.com
With the prevalence of the Pretraining-Finetuning paradigm in transfer learning, the
robustness of downstream tasks has become a critical concern. In this work we delve into …

Certified robust neural networks: Generalization and corruption resistance

A Bennouna, R Lucas… - … Conference on Machine …, 2023 - proceedings.mlr.press
Recent work has demonstrated that robustness (to "corruption") can be at odds with
generalization. Adversarial training, for instance, aims to reduce the problematic …

Adversarial training should be cast as a non-zero-sum game

A Robey, F Latorre, GJ Pappas, H Hassani… - arXiv preprint arXiv …, 2023 - arxiv.org
One prominent approach toward resolving the adversarial vulnerability of deep neural
networks is the two-player zero-sum paradigm of adversarial training, in which predictors are …
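The two-player zero-sum game the snippet refers to can be sketched on a toy linear model (an assumed setup, not the paper's reformulation): the inner player maximizes the loss with an L∞-bounded perturbation, here approximated by a single FGSM step, while the outer player updates the weights to minimize the loss on the perturbed input.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w, x, y):
    # binary cross-entropy of a linear classifier
    p = sigmoid(w @ x)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm(w, x, y, eps):
    # inner maximization: one signed-gradient step (FGSM), exact for a linear model
    grad_x = (sigmoid(w @ x) - y) * w   # d loss / d x
    return x + eps * np.sign(grad_x)

w = np.array([0.5, -0.5])               # outer player's weights
x, y, eps, lr = np.array([1.0, -1.0]), 1.0, 0.1, 0.05
for _ in range(100):
    x_adv = fgsm(w, x, y, eps)                 # inner max: craft perturbation
    grad_w = (sigmoid(w @ x_adv) - y) * x_adv  # outer min: d loss / d w
    w -= lr * grad_w
```

Because the same loss is maximized by one player and minimized by the other, the game is zero-sum; the non-zero-sum reformulation the paper argues for gives each player its own objective.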

Fast propagation is better: Accelerating single-step adversarial training via sampling subnetworks

X Jia, J Li, J Gu, Y Bai, X Cao - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Adversarial training has shown promise in building robust models against adversarial
examples. A major drawback of adversarial training is the computational overhead …

Your Transferability Barrier is Fragile: Free-Lunch for Transferring the Non-Transferable Learning

Z Hong, L Shen, T Liu - … of the IEEE/CVF Conference on …, 2024 - openaccess.thecvf.com
Recently, non-transferable learning (NTL) was proposed to restrict models' generalization
toward the target domain(s), serving as a state-of-the-art solution for intellectual …

Eliminating catastrophic overfitting via abnormal adversarial examples regularization

R Lin, C Yu, T Liu - Advances in Neural Information …, 2024 - proceedings.neurips.cc
Single-step adversarial training (SSAT) has demonstrated the potential to achieve both
efficiency and robustness. However, SSAT suffers from catastrophic overfitting (CO), a …