Multitask learning strengthens adversarial robustness

C Mao, A Gupta, V Nitin, B Ray, S Song, J Yang… - Computer Vision–ECCV …, 2020 - Springer
Although deep networks achieve strong accuracy on a range of computer vision
benchmarks, they remain vulnerable to adversarial attacks, where imperceptible input …
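
For context on the threat model shared across these entries: the canonical one-step attack is the fast gradient sign method (FGSM) of Goodfellow et al. A minimal PyTorch sketch follows; the model, label tensor, eps budget, and [0, 1] pixel range are illustrative assumptions, not details from the paper above.

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, eps=8 / 255):
    """One-step FGSM: move x by eps in the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    nn.functional.cross_entropy(model(x), y).backward()
    # For small eps the change is imperceptible yet can flip the prediction.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0, 1).detach()
```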

When NAS meets robustness: In search of robust architectures against adversarial attacks

M Guo, Y Yang, R Xu, Z Liu… - Proceedings of the IEEE …, 2020 - openaccess.thecvf.com
Recent advances in adversarial attacks uncover the intrinsic vulnerability of modern deep
neural networks. Since then, extensive efforts have been devoted to enhancing the …

Architectural adversarial robustness: The case for deep pursuit

G Cazenavette, C Murdock… - Proceedings of the IEEE …, 2021 - openaccess.thecvf.com
Despite their unmatched performance, deep neural networks remain susceptible to targeted
attacks by nearly imperceptible levels of adversarial noise. While the underlying cause of …
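
The targeted attacks mentioned here differ from the untargeted FGSM sketch above only in label and sign: the attacker descends the loss toward a chosen class. A hedged one-line variant, under the same illustrative assumptions:

```python
import torch
import torch.nn as nn

def targeted_fgsm(model, x, y_target, eps=8 / 255):
    """Targeted one-step attack: step against the gradient so the
    prediction moves toward the attacker-chosen class y_target."""
    x = x.clone().detach().requires_grad_(True)
    nn.functional.cross_entropy(model(x), y_target).backward()
    x_adv = x - eps * x.grad.sign()  # minus sign vs. untargeted FGSM
    return x_adv.clamp(0, 1).detach()
```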

Deep Defense: Training DNNs with improved adversarial robustness

Z Yan, Y Guo, C Zhang - Advances in Neural Information …, 2018 - proceedings.neurips.cc
Despite their efficacy on a variety of computer vision tasks, deep neural networks (DNNs) are
vulnerable to adversarial attacks, limiting their applications in security-critical systems …

Exploring the relationship between architectural design and adversarially robust generalization

A Liu, S Tang, S Liang, R Gong… - Proceedings of the …, 2023 - openaccess.thecvf.com
Adversarial training has been demonstrated to be one of the most effective remedies for
defending adversarial examples, yet it often suffers from the huge robustness generalization …
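
Adversarial training of the kind this abstract refers to typically follows the min-max recipe of Madry et al.: craft PGD examples on the fly and train on them. A minimal sketch, assuming a standard classifier, data loader, and L-infinity budget; the hyperparameters are placeholders, not values from the paper.

```python
import torch
import torch.nn.functional as F

def pgd(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Multi-step PGD attack inside an L-infinity ball of radius eps."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)  # random start
    for _ in range(steps):
        x_adv = x_adv.clone().detach().requires_grad_(True)
        F.cross_entropy(model(x_adv), y).backward()
        x_adv = x_adv + alpha * x_adv.grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project back into the ball
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()

def adversarial_training_epoch(model, loader, optimizer):
    """One epoch of PGD adversarial training: fit the worst-case inputs."""
    model.train()
    for x, y in loader:
        x_adv = pgd(model, x, y)   # inner maximization
        optimizer.zero_grad()      # clear grads accumulated inside pgd
        F.cross_entropy(model(x_adv), y).backward()
        optimizer.step()
```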

Exploring architectural ingredients of adversarially robust deep neural networks

H Huang, Y Wang, S Erfani, Q Gu… - Advances in Neural …, 2021 - proceedings.neurips.cc
Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks. A range of
defense methods have been proposed to train adversarially robust DNNs, among which …

AdvEdge: Optimizing adversarial perturbations against interpretable deep learning

E Abdukhamidov, M Abuhamad, F Juraev… - Computational Data and …, 2021 - Springer
Deep Neural Networks (DNNs) have achieved state-of-the-art performance in
various applications. It is crucial to verify that the high accuracy prediction for a given task is …

Skip connections matter: On the transferability of adversarial examples generated with ResNets

D Wu, Y Wang, ST Xia, J Bailey, X Ma - arXiv preprint arXiv:2002.05990, 2020 - arxiv.org
Skip connections are an essential component of current state-of-the-art deep neural
networks (DNNs) such as ResNet, WideResNet, DenseNet, and ResNeXt. Despite their …
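
The paper's proposal (the Skip Gradient Method) down-weights gradients flowing through residual modules so that backpropagation favors the skip connections, which improves the transferability of the crafted examples. A minimal sketch of the gradient-decay idea on a toy block; the detach trick and the gamma value are one possible implementation, not the authors' code.

```python
import torch
import torch.nn as nn

class SkipBiasedBlock(nn.Module):
    """Toy residual block whose residual-branch gradient is scaled by gamma.

    The forward value is exactly x + branch(x); the detach identity only
    rescales the gradient through the residual branch, biasing backprop
    (and hence crafted perturbations) toward the skip path.
    """
    def __init__(self, dim=16, gamma=0.5):
        super().__init__()
        self.branch = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim)
        )
        self.gamma = gamma

    def forward(self, x):
        b = self.branch(x)
        # gamma * b + (1 - gamma) * b.detach() equals b in value,
        # but its gradient through the branch is scaled by gamma.
        return x + self.gamma * b + (1 - self.gamma) * b.detach()
```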

Procedural noise adversarial examples for black-box attacks on deep convolutional networks

KT Co, L Muñoz-González, S de Maupeou… - Proceedings of the 2019 …, 2019 - dl.acm.org
Deep Convolutional Networks (DCNs) have been shown to be vulnerable to adversarial
examples: perturbed inputs specifically designed to produce intentional errors in the …
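
The procedural patterns in question include Perlin and Gabor noise; the key point is that the attacker searches a handful of generator parameters rather than per-pixel values, which keeps black-box query counts low. A simplified sinusoidal stand-in follows; the three-parameter form is an illustrative simplification, not the paper's generator.

```python
import numpy as np

def procedural_perturbation(h, w, freq, angle, phase, eps=8 / 255):
    """Structured max-norm perturbation from three generator parameters.

    Simplified stand-in for Perlin/Gabor noise: the attack searches over
    (freq, angle, phase) instead of h * w independent pixel values.
    """
    ys, xs = np.mgrid[0:h, 0:w]
    t = np.cos(angle) * xs + np.sin(angle) * ys  # oriented coordinate
    pattern = np.sin(2 * np.pi * freq * t / max(h, w) + phase)
    return eps * np.sign(pattern)  # L-infinity perturbation of size eps
```

A black-box attacker would sample candidate (freq, angle, phase) triples and keep whichever perturbation most degrades the target model's confidence.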

Instruct2Attack: Language-Guided Semantic Adversarial Attacks

J Liu, C Wei, Y Guo, H Yu, A Yuille, S Feizi… - arXiv preprint arXiv …, 2023 - arxiv.org
We propose Instruct2Attack (I2A), a language-guided semantic attack that generates
semantically meaningful perturbations according to free-form language instructions. We …