Defensive patches for robust recognition in the physical world

J Wang, Z Yin, P Hu, A Liu, R Tao… - Proceedings of the …, 2022 - openaccess.thecvf.com
To operate in real-world, high-stakes environments, deep learning systems must withstand
noise that continually undermines their robustness. Data-end defense, which …

Improved, deterministic smoothing for l_1 certified robustness

AJ Levine, S Feizi - International Conference on Machine …, 2021 - proceedings.mlr.press
Randomized smoothing is a general technique for computing sample-dependent robustness
guarantees against adversarial attacks for deep classifiers. Prior works on randomized …
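The entry above concerns randomized smoothing. As a point of reference, the standard (Monte-Carlo, non-deterministic) smoothed-classifier prediction — not the paper's improved deterministic l_1 method — can be sketched as follows; `f`, `smoothed_predict`, and the toy classifier are illustrative names, not from the paper:

```python
import numpy as np

def smoothed_predict(f, x, sigma=0.25, n=1000, rng=None):
    """Monte-Carlo approximation of the smoothed classifier
    g(x) = argmax_c P[f(x + noise) = c], with noise ~ N(0, sigma^2 I)."""
    rng = np.random.default_rng(rng)
    noise = rng.normal(0.0, sigma, size=(n,) + x.shape)
    preds = np.array([f(x + eps) for eps in noise])
    classes, counts = np.unique(preds, return_counts=True)
    return classes[np.argmax(counts)]

# Toy base classifier: sign of the first coordinate.
f = lambda z: int(z[0] > 0)
x = np.array([0.8, -0.2])
print(smoothed_predict(f, x, sigma=0.25, n=500, rng=0))  # predicts 1
```

The certified radius in such schemes grows with the margin between the top two class probabilities under noise; the cited work replaces the sampling step with a deterministic procedure.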

Adversarial robustness: From self-supervised pre-training to fine-tuning

T Chen, S Liu, S Chang, Y Cheng… - Proceedings of the …, 2020 - openaccess.thecvf.com
Pretrained models from self-supervision are widely used to fine-tune downstream
tasks faster or to achieve better accuracy. However, gaining robustness from pretraining is left …

Improved deterministic l2 robustness on CIFAR-10 and CIFAR-100

S Singla, S Singla, S Feizi - arXiv preprint arXiv:2108.04062, 2021 - arxiv.org
Training convolutional neural networks (CNNs) with a strict Lipschitz constraint under the
$l_{2}$ norm is useful for provable adversarial robustness, interpretable gradients and …
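The snippet above refers to constraining a network's $l_{2}$ Lipschitz constant. One generic way to bound it for a single linear layer — a simple sketch of the idea, not the stricter layer constructions the paper develops — is to cap the weight matrix's largest singular value via power iteration:

```python
import numpy as np

def spectral_normalize(W, n_iter=50):
    """Scale W so its largest singular value (its l2 Lipschitz constant
    as a linear map) is at most 1, estimated by power iteration."""
    v = np.random.default_rng(0).normal(size=W.shape[1])
    for _ in range(n_iter):
        u = W @ v
        u /= np.linalg.norm(u)
        v = W.T @ u
        v /= np.linalg.norm(v)
    sigma = u @ (W @ v)  # estimate of the leading singular value
    return W / max(sigma, 1.0)

W = np.array([[3.0, 0.0], [0.0, 0.5]])
W_hat = spectral_normalize(W)
print(np.linalg.svd(W_hat, compute_uv=False)[0])  # largest singular value ≈ 1.0
```

Composing 1-Lipschitz layers yields a 1-Lipschitz network, which is what makes the provable robustness and interpretable-gradient claims possible.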

Decoupled adversarial contrastive learning for self-supervised adversarial robustness

C Zhang, K Zhang, C Zhang, A Niu, J Feng… - … on Computer Vision, 2022 - Springer
Adversarial training (AT) for robust representation learning and self-supervised learning
(SSL) for unsupervised representation learning are two active research fields. Integrating AT …

Towards better understanding of training certifiably robust models against adversarial examples

S Lee, W Lee, J Park, J Lee - Advances in Neural …, 2021 - proceedings.neurips.cc
We study the problem of training certifiably robust models against adversarial examples.
Certifiable training minimizes an upper bound on the worst-case loss over the allowed …
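The abstract above describes certifiable training as minimizing an upper bound on the worst-case loss. One standard way to compute such a bound — interval bound propagation (IBP), used here purely as an illustration of the general setup the paper studies — propagates elementwise input intervals through a linear layer:

```python
import numpy as np

def ibp_linear(W, b, lo, hi):
    """Interval bound propagation through y = W x + b: given elementwise
    bounds lo <= x <= hi, return sound elementwise bounds on y."""
    mid, rad = (lo + hi) / 2.0, (hi - lo) / 2.0
    mid_out = W @ mid + b
    rad_out = np.abs(W) @ rad  # worst-case deviation per output coordinate
    return mid_out - rad_out, mid_out + rad_out

W, b = np.array([[1.0, -1.0]]), np.array([0.0])
lo_out, hi_out = ibp_linear(W, b, np.array([0.0, 0.0]), np.array([1.0, 1.0]))
print(lo_out, hi_out)  # [-1.] [1.]
```

Training against a loss evaluated at these bounds minimizes an upper bound on the worst-case loss over the allowed perturbation set.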

Data augmentation alone can improve adversarial training

L Li, M Spratling - arXiv preprint arXiv:2301.09879, 2023 - arxiv.org
Adversarial training suffers from the issue of robust overfitting, which seriously impairs its
generalization performance. Data augmentation, which is effective at preventing overfitting …
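For context on the adversarial-training loop this snippet refers to, here is a minimal sketch of one training step for logistic regression, using FGSM as the inner attack (papers in this area typically use multi-step PGD; the data-augmentation component the paper studies is not shown):

```python
import numpy as np

def fgsm(grad_x, eps):
    """Fast Gradient Sign Method: an eps-bounded l_inf perturbation."""
    return eps * np.sign(grad_x)

def adv_train_step(w, x, y, eps=0.1, lr=0.1):
    """One adversarial-training step: perturb the input adversarially,
    then take a gradient step on the loss at the perturbed point."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    grad_x = (sigmoid(w @ x) - y) * w        # loss gradient w.r.t. the input
    x_adv = x + fgsm(grad_x, eps)
    grad_w = (sigmoid(w @ x_adv) - y) * x_adv  # loss gradient at x_adv
    return w - lr * grad_w

w = np.zeros(3)
x, y = np.array([1.0, -2.0, 0.5]), 1.0
w = adv_train_step(w, x, y)
```

Robust overfitting arises when the model memorizes the adversarial examples seen during such steps; the cited paper argues augmentation of `x` before the attack can mitigate it.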

Adversarial robustness under long-tailed distribution

T Wu, Z Liu, Q Huang, Y Wang… - Proceedings of the IEEE …, 2021 - openaccess.thecvf.com
Adversarial robustness has recently attracted extensive study, revealing the vulnerability
and intrinsic characteristics of deep networks. However, existing works on adversarial …

Generalizable data-free objective for crafting universal adversarial perturbations

KR Mopuri, A Ganeshan… - IEEE transactions on …, 2018 - ieeexplore.ieee.org
Machine learning models are susceptible to adversarial perturbations: small changes to
input that can cause large changes in output. It is also demonstrated that there exist input …
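The entry above concerns universal adversarial perturbations (UAPs): a single perturbation that fools the model on many inputs. The sketch below accumulates one perturbation over a small data batch; note this is the classic data-driven formulation for illustration, whereas the cited paper's contribution is a *data-free* objective. `grad_fn` is a hypothetical callable returning the loss gradient w.r.t. its input:

```python
import numpy as np

def craft_uap(grad_fn, xs, eps=0.1, step=0.01, steps=10):
    """Heuristically accumulate a single perturbation delta that raises
    the loss across all inputs in xs, projected to an l_inf ball."""
    delta = np.zeros_like(xs[0])
    for _ in range(steps):
        for x in xs:
            delta += step * np.sign(grad_fn(x + delta))
            delta = np.clip(delta, -eps, eps)  # project back to the eps-ball
    return delta
```

The existence of such input-agnostic perturbations is exactly the phenomenon the truncated sentence begins to describe.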

Achieving robustness in classification using optimal transport with hinge regularization

M Serrurier, F Mamalet… - Proceedings of the …, 2021 - openaccess.thecvf.com
Adversarial examples have exposed Deep Neural Networks' vulnerability to small local
noise. It has been shown that constraining their Lipschitz constant should enhance …