MagNet: a two-pronged defense against adversarial examples

D Meng, H Chen - Proceedings of the 2017 ACM SIGSAC conference on …, 2017 - dl.acm.org
Deep learning has shown impressive performance on hard perceptual problems. However,
researchers found deep learning systems to be vulnerable to small, specially crafted …

Improving the adversarial robustness and interpretability of deep neural networks by regularizing their input gradients

A Ross, F Doshi-Velez - Proceedings of the AAAI conference on …, 2018 - ojs.aaai.org
Deep neural networks have proven remarkably effective at solving many classification
problems, but have been criticized recently for two major weaknesses: the reasons behind …

Deep neural rejection against adversarial examples

A Sotgiu, A Demontis, M Melis, B Biggio… - EURASIP Journal on …, 2020 - Springer
Despite the impressive performance reported by deep neural networks in different
application domains, they remain largely vulnerable to adversarial examples, i.e., input …

Adversarial attacks and defenses in deep learning: From a perspective of cybersecurity

S Zhou, C Liu, D Ye, T Zhu, W Zhou, PS Yu - ACM Computing Surveys, 2022 - dl.acm.org
The outstanding performance of deep neural networks has promoted deep learning
applications in a broad set of domains. However, the potential risks caused by adversarial …

Adversarial defense via learning to generate diverse attacks

Y Jang, T Zhao, S Hong, H Lee - Proceedings of the IEEE …, 2019 - openaccess.thecvf.com
With the remarkable success of deep learning, Deep Neural Networks (DNNs) have been
applied as dominant tools to various machine learning domains. Despite this success …

Gotta catch 'em all: Using honeypots to catch adversarial attacks on neural networks

S Shan, E Wenger, B Wang, B Li, H Zheng… - Proceedings of the 2020 …, 2020 - dl.acm.org
Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks. Numerous
efforts either try to patch weaknesses in trained models, or try to make it difficult or costly to …

DeepCloak: Masking deep neural network models for robustness against adversarial samples

J Gao, B Wang, Z Lin, W Xu, Y Qi - arXiv preprint arXiv:1702.06763, 2017 - arxiv.org
Recent studies have shown that deep neural networks (DNNs) are vulnerable to adversarial
samples: maliciously perturbed samples crafted to yield incorrect model outputs. Such …

GAT: Generative adversarial training for adversarial example detection and robust classification

X Yin, S Kolouri, GK Rohde - arXiv preprint arXiv:1905.11475, 2019 - arxiv.org
The vulnerabilities of deep neural networks against adversarial examples have become a
significant concern for deploying these models in sensitive domains. Devising a definitive …

Simple Black-Box Adversarial Attacks on Deep Neural Networks

N Narodytska, SP Kasiviswanathan - CVPR Workshops, 2017 - openaccess.thecvf.com
Deep neural networks are powerful and popular learning models that achieve state-of-the-art
pattern recognition performance on many computer vision, speech, and language …

Generating adversarial examples with adversarial networks

C Xiao, B Li, JY Zhu, W He, M Liu, D Song - arXiv preprint arXiv …, 2018 - arxiv.org
Deep neural networks (DNNs) have been found to be vulnerable to adversarial examples
resulting from adding small-magnitude perturbations to inputs. Such adversarial examples …