Ptolemy: Architecture support for robust deep learning

Y Gan, Y Qiu, J Leng, M Guo… - 2020 53rd Annual IEEE …, 2020 - ieeexplore.ieee.org
Deep learning is vulnerable to adversarial attacks, where carefully-crafted input
perturbations could mislead a well-trained Deep Neural Network (DNN) to produce incorrect …

Gradient similarity: An explainable approach to detect adversarial attacks against deep learning

J Dhaliwal, S Shintre - arXiv preprint arXiv:1806.10707, 2018 - arxiv.org
Deep neural networks are susceptible to small-but-specific adversarial perturbations
capable of deceiving the network. This vulnerability can lead to potentially harmful …

Massif: Interactive interpretation of adversarial attacks on deep learning

N Das, H Park, ZJ Wang, F Hohman… - Extended Abstracts of …, 2020 - dl.acm.org
Deep neural networks (DNNs) are increasingly powering high-stakes applications such as
autonomous cars and healthcare; however, DNNs are often treated as "black boxes" in such …

DNNGuard: An elastic heterogeneous DNN accelerator architecture against adversarial attacks

X Wang, R Hou, B Zhao, F Yuan, J Zhang… - Proceedings of the …, 2020 - dl.acm.org
Recent studies show that Deep Neural Networks (DNN) are vulnerable to adversarial
samples that are generated by perturbing correctly classified inputs to cause the …

Simple black-box adversarial perturbations for deep networks

N Narodytska, SP Kasiviswanathan - arXiv preprint arXiv:1612.06299, 2016 - arxiv.org
Deep neural networks are powerful and popular learning models that achieve state-of-the-
art pattern recognition performance on many computer vision, speech, and language …

A survey on the vulnerability of deep neural networks against adversarial attacks

A Michel, SK Jha, R Ewetz - Progress in Artificial Intelligence, 2022 - Springer
With the advancement of accelerated hardware in recent years, there has been a surge in
the development and application of intelligent systems. Deep learning systems, in particular …

Bluff: Interactively deciphering adversarial attacks on deep neural networks

N Das, H Park, ZJ Wang, F Hohman… - 2020 IEEE …, 2020 - ieeexplore.ieee.org
Deep neural networks (DNNs) are now commonly used in many domains. However, they
are vulnerable to adversarial attacks: carefully-crafted perturbations on data inputs that can …

On adversarial robustness: A neural architecture search perspective

C Devaguptapu, D Agarwal, G Mittal… - Proceedings of the …, 2021 - openaccess.thecvf.com
Adversarial robustness of deep learning models has gained much traction in the last few
years. Various attacks and defenses are proposed to improve the adversarial robustness of …

DLA: dense-layer-analysis for adversarial example detection

P Sperl, CY Kao, P Chen, X Lei… - 2020 IEEE European …, 2020 - ieeexplore.ieee.org
In recent years Deep Neural Networks (DNNs) have achieved remarkable results and even
showed superhuman capabilities in a broad range of domains. This led people to trust in …

Gotta catch 'em all: Using honeypots to catch adversarial attacks on neural networks

S Shan, E Wenger, B Wang, B Li, H Zheng… - Proceedings of the 2020 …, 2020 - dl.acm.org
Deep neural networks (DNN) are known to be vulnerable to adversarial attacks. Numerous
efforts either try to patch weaknesses in trained models, or try to make it difficult or costly to …