Adversarial attacks and defences: A survey

A Chakraborty, M Alam, V Dey… - arXiv preprint arXiv …, 2018 - arxiv.org
Deep learning has emerged as a strong and efficient framework that can be applied to a
broad spectrum of complex learning problems that were difficult to solve using the …

DEEPSEC: A uniform platform for security analysis of deep learning model

X Ling, S Ji, J Zou, J Wang, C Wu, B Li… - 2019 IEEE Symposium …, 2019 - ieeexplore.ieee.org
Deep learning (DL) models are inherently vulnerable to adversarial examples (maliciously
crafted inputs that trigger target DL models to misbehave), which significantly hinders the …

LANCE: A comprehensive and lightweight CNN defense methodology against physical adversarial attacks on embedded multimedia applications

Z Xu, F Yu, X Chen - 2020 25th Asia and South Pacific Design …, 2020 - ieeexplore.ieee.org
Recently, adversarial attacks have been applied in the physical world, causing practical issues
for various Convolutional Neural Network (CNN)-powered applications. Most existing …

RAID: Randomized adversarial-input detection for neural networks

HF Eniser, M Christakis, V Wüstholz - arXiv preprint arXiv:2002.02776, 2020 - arxiv.org
In recent years, neural networks have become the default choice for image classification and
many other learning tasks, even though they are vulnerable to so-called adversarial attacks …

Adversarial sample detection for deep neural network through model mutation testing

J Wang, G Dong, J Sun, X Wang… - 2019 IEEE/ACM 41st …, 2019 - ieeexplore.ieee.org
Deep neural networks (DNNs) have been shown to be useful in a wide range of applications.
However, they are also known to be vulnerable to adversarial samples. By transforming a …

Exploring adversarial attacks on neural networks: An explainable approach

J Renkhoff, W Tan, A Velasquez… - 2022 IEEE …, 2022 - ieeexplore.ieee.org
Deep Learning (DL) is being applied in various domains, especially in safety-critical
applications such as autonomous driving. Consequently, it is of great significance to ensure …

Stealthy attack on algorithmic-protected DNNs via smart bit flipping

B Ghavami, S Movi, Z Fang… - 2022 23rd International …, 2022 - ieeexplore.ieee.org
Recently, deep neural networks (DNNs) have been deployed in safety-critical systems such
as autonomous vehicles and medical devices. Shortly after that, the vulnerability of DNNs …

LAFIT: Efficient and Reliable Evaluation of Adversarial Defenses With Latent Features

Y Yu, X Gao, CZ Xu - IEEE Transactions on Pattern Analysis …, 2023 - ieeexplore.ieee.org
Deep convolutional neural networks (CNNs) can easily be tricked into giving incorrect outputs
by adding tiny perturbations to the input that are imperceptible to humans. This makes them …

FANNet: Formal analysis of noise tolerance, training bias and input sensitivity in neural networks

M Naseer, MF Minhas, F Khalid… - … design, automation & …, 2020 - ieeexplore.ieee.org
With a constant improvement in the network architectures and training methodologies,
Neural Networks (NNs) are increasingly being deployed in real-world Machine Learning …

What does it mean to learn in deep networks? And, how does one detect adversarial attacks?

CA Corneanu, M Madadi, S Escalera… - Proceedings of the …, 2019 - openaccess.thecvf.com
The flexibility and high accuracy of Deep Neural Networks (DNNs) have transformed
computer vision. However, the fact that we do not know when a specific DNN will work and when it …