LiBRe: A practical Bayesian approach to adversarial detection

Z Deng, X Yang, S Xu, H Su… - Proceedings of the IEEE …, 2021 - openaccess.thecvf.com
Despite their appealing flexibility, deep neural networks (DNNs) are vulnerable to
adversarial examples. Various adversarial defense strategies have been proposed to …

DeepCloak: Masking deep neural network models for robustness against adversarial samples

J Gao, B Wang, Z Lin, W Xu, Y Qi - arXiv preprint arXiv:1702.06763, 2017 - arxiv.org
Recent studies have shown that deep neural networks (DNN) are vulnerable to adversarial
samples: maliciously-perturbed samples crafted to yield incorrect model outputs. Such …

UnMask: Adversarial detection and defense through robust feature alignment

S Freitas, ST Chen, ZJ Wang… - 2020 IEEE International …, 2020 - ieeexplore.ieee.org
Recent research has demonstrated that deep learning architectures are vulnerable to
adversarial attacks, highlighting the vital need for defensive techniques to detect and …

DeepFense: Online accelerated defense against adversarial deep learning

BD Rouhani, M Samragh, M Javaheripi… - 2018 IEEE/ACM …, 2018 - ieeexplore.ieee.org
Recent advances in adversarial Deep Learning (DL) have opened up a largely unexplored
surface for malicious attacks jeopardizing the integrity of autonomous DL systems. With the …

EMShepherd: Detecting adversarial samples via side-channel leakage

R Ding, C Gongye, S Wang, AA Ding… - Proceedings of the 2023 …, 2023 - dl.acm.org
Deep Neural Networks (DNN) are vulnerable to adversarial perturbations—small changes
crafted deliberately on the input to mislead the model for wrong predictions. Adversarial …

AI-Guardian: Defeating adversarial attacks using backdoors

H Zhu, S Zhang, K Chen - 2023 IEEE Symposium on Security …, 2023 - ieeexplore.ieee.org
Deep neural networks (DNNs) have been widely used in many fields due to their
increasingly high accuracy. However, they are also vulnerable to adversarial attacks, posing …

DNDNet: Reconfiguring CNN for adversarial robustness

A Goel, A Agarwal, M Vatsa… - Proceedings of the …, 2020 - openaccess.thecvf.com
Several successful adversarial attacks have demonstrated the vulnerabilities of deep
learning algorithms. These attacks are detrimental in building deep learning based …

NIC: Detecting adversarial samples with neural network invariant checking

S Ma, Y Liu - Proceedings of the 26th network and distributed system …, 2019 - par.nsf.gov
Deep Neural Networks (DNN) are vulnerable to adversarial samples that are generated by
perturbing correctly classified inputs to cause DNN models to misbehave (e.g., …

Simple Black-Box Adversarial Attacks on Deep Neural Networks

N Narodytska, SP Kasiviswanathan - CVPR Workshops, 2017 - openaccess.thecvf.com
Deep neural networks are powerful and popular learning models that achieve state-of-the-
art pattern recognition performance on many computer vision, speech, and language …

Jujutsu: A two-stage defense against adversarial patch attacks on deep neural networks

Z Chen, P Dash, K Pattabiraman - Proceedings of the 2023 ACM Asia …, 2023 - dl.acm.org
Adversarial patch attacks create adversarial examples by injecting arbitrary distortions within
a bounded region of the input to fool deep neural networks (DNNs). These attacks are robust …