Mitigating adversarial attacks for deep neural networks by input deformation and augmentation

P Qiu, Q Wang, D Wang, Y Lyu, Z Lu… - 2020 25th Asia and …, 2020 - ieeexplore.ieee.org
Typical Deep Neural Networks (DNNs) are susceptible to adversarial attacks that add
malicious perturbations to the input to mislead the DNN model. Most of the state-of-the-art …
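
Based only on the title and snippet, a minimal sketch of an input-deformation defense, assuming a PyTorch classifier: randomly resize and pad each image before classification so a fixed adversarial perturbation no longer lines up with the pixels it was crafted for. The output size and resize range are illustrative assumptions, not the paper's configuration.

```python
import random
import torch
import torch.nn.functional as F

def deform_input(x: torch.Tensor, out_size: int = 224) -> torch.Tensor:
    """Randomly resize a batch of images, then zero-pad back to out_size."""
    new_size = random.randint(int(out_size * 0.8), out_size)
    x = F.interpolate(x, size=(new_size, new_size), mode="bilinear",
                      align_corners=False)
    pad_total = out_size - new_size
    left = random.randint(0, pad_total)
    top = random.randint(0, pad_total)
    # F.pad takes (left, right, top, bottom) for the last two dims.
    return F.pad(x, (left, pad_total - left, top, pad_total - top))

def defended_predict(model: torch.nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Classify after randomized deformation; the randomness is the defense."""
    with torch.no_grad():
        return model(deform_input(x)).argmax(dim=1)
```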

Defensive dropout for hardening deep neural networks under adversarial attacks

S Wang, X Wang, P Zhao, W Wen… - 2018 IEEE/ACM …, 2018 - ieeexplore.ieee.org
Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks. That is,
adversarial examples, obtained by adding delicately crafted distortions onto original legal …
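
A minimal sketch of the defensive-dropout idea as the title describes it, assuming a PyTorch model: keep a dropout layer active at test time so the network an attacker queries is stochastic. The feature dimension and drop rate are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DropoutDefendedNet(nn.Module):
    def __init__(self, backbone: nn.Module, num_features: int,
                 num_classes: int, drop_rate: float = 0.5):
        super().__init__()
        self.backbone = backbone
        self.drop_rate = drop_rate
        self.classifier = nn.Linear(num_features, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(x)
        # training=True keeps dropout active even under model.eval(), so
        # test-time inference stays stochastic -- the core of the defense.
        feats = F.dropout(feats, p=self.drop_rate, training=True)
        return self.classifier(feats)
```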

Defending DNN adversarial attacks with pruning and logits augmentation

S Wang, X Wang, S Ye, P Zhao… - 2018 IEEE Global …, 2018 - ieeexplore.ieee.org
Deep neural networks (DNNs) have been shown to be powerful models that perform
extremely well on many complicated artificial intelligence tasks. However, recent research …
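
A minimal sketch of the pruning half of the title, assuming a PyTorch model: zero out the smallest-magnitude weights. The 90% sparsity target is an illustrative assumption, not the paper's reported setting.

```python
import torch

def prune_by_magnitude(model: torch.nn.Module, sparsity: float = 0.9) -> None:
    """Zero out the smallest-magnitude entries of every weight tensor in place."""
    with torch.no_grad():
        for name, param in model.named_parameters():
            if "weight" not in name:
                continue
            k = int(param.numel() * sparsity)
            if k == 0:
                continue
            # kthvalue gives the k-th smallest magnitude as the cutoff.
            threshold = param.abs().flatten().kthvalue(k).values
            param.mul_((param.abs() > threshold).float())
```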

Admm attack: an enhanced adversarial attack for deep neural networks with undetectable distortions

P Zhao, K Xu, S Liu, Y Wang, X Lin - Proceedings of the 24th Asia and …, 2019 - dl.acm.org
Many recent studies demonstrate that state-of-the-art deep neural networks (DNNs) might
be easily fooled by adversarial examples, generated by adding carefully crafted and visually …
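
The ADMM formulation itself is involved; for context, here is the classic one-step FGSM attack that such stronger optimization-based attacks improve upon. This is a generic sketch of FGSM, plainly not the paper's ADMM method; the epsilon budget is an illustrative assumption.

```python
import torch
import torch.nn.functional as F

def fgsm(model: torch.nn.Module, x: torch.Tensor, y: torch.Tensor,
         eps: float = 8 / 255) -> torch.Tensor:
    """One-step L-infinity attack: step along the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Perturb by eps in the gradient's sign direction, staying in [0, 1].
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()
```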

Random directional attack for fooling deep neural networks

W Luo, C Wu, N Zhou, L Ni - arXiv preprint arXiv:1908.02658, 2019 - arxiv.org
Deep neural networks (DNNs) have been widely used in many fields such as image
processing and speech recognition; however, they are vulnerable to adversarial examples, and …
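
A loose sketch of the random-directional idea the title suggests, assuming a PyTorch classifier: instead of following the loss gradient, sample random perturbation directions and keep one that flips the prediction. The step size and trial count are illustrative assumptions, not the paper's algorithm.

```python
import torch

def random_directional_attack(model: torch.nn.Module, x: torch.Tensor,
                              y: torch.Tensor, eps: float = 8 / 255,
                              trials: int = 100) -> torch.Tensor:
    """Try random sign patterns inside an L-infinity ball of radius eps."""
    with torch.no_grad():
        for _ in range(trials):
            direction = torch.randint_like(x, 0, 2) * 2 - 1  # random +/-1 signs
            x_adv = (x + eps * direction).clamp(0, 1)
            # Accept the perturbation if the whole batch is misclassified.
            if (model(x_adv).argmax(dim=1) != y).all():
                return x_adv
    return x  # no fooling direction found within the query budget
```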

Model compression hardens deep neural networks: A new perspective to prevent adversarial attacks

Q Liu, W Wen - IEEE Transactions on Neural Networks and …, 2021 - ieeexplore.ieee.org
Deep neural networks (DNNs) have demonstrated phenomenal success in many real-
world applications. However, recent works show that a DNN's decision can be easily …
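
One form of the compression the title refers to is weight quantization; a minimal sketch assuming a PyTorch model, with the bit width as an illustrative assumption rather than the paper's setting.

```python
import torch

def quantize_weights(model: torch.nn.Module, bits: int = 4) -> None:
    """Uniformly quantize each parameter tensor to 2**bits levels in place."""
    levels = 2 ** bits - 1
    with torch.no_grad():
        for param in model.parameters():
            lo, hi = param.min(), param.max()
            if hi == lo:
                continue  # constant tensor; nothing to quantize
            scale = (hi - lo) / levels
            # Snap every weight to the nearest quantization level.
            param.copy_(((param - lo) / scale).round() * scale + lo)
```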

Boosting adversarial transferability via gradient relevance attack

H Zhu, Y Ren, X Sui, L Yang… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
A wealth of adversarial attack research has revealed the fragility of deep neural networks
(DNNs), where imperceptible perturbations can cause drastic changes in the output …
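
Transferability attacks in this line of work typically iterate a surrogate model's gradients with momentum (MI-FGSM); the gradient relevance attack refines how those gradients are aggregated. Below is the generic momentum baseline, plainly not the paper's method; step count, decay factor, and budget are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def mi_fgsm(model: torch.nn.Module, x: torch.Tensor, y: torch.Tensor,
            eps: float = 8 / 255, steps: int = 10, mu: float = 1.0) -> torch.Tensor:
    """Momentum iterative FGSM on a surrogate model for transferable examples."""
    alpha = eps / steps
    g = torch.zeros_like(x)
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Normalize the gradient and accumulate momentum across steps.
        g = mu * g + grad / grad.abs().mean()
        x_adv = (x_adv.detach() + alpha * g.sign()).clamp(x - eps, x + eps).clamp(0, 1)
    return x_adv
```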

Fencebox: A platform for defeating adversarial examples with data augmentation techniques

H Qiu, Y Zeng, T Zhang, Y Jiang, M Qiu - arXiv preprint arXiv:2012.01701, 2020 - arxiv.org
It has been extensively studied that Deep Neural Networks (DNNs) are vulnerable to
Adversarial Examples (AEs). As more and more advanced adversarial attack methods have been …
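
A minimal sketch of the platform's core idea as the snippet describes it: apply a randomly chosen data-augmentation transform before each query so the attacker faces a moving target. The specific torchvision transforms and parameters are illustrative assumptions, not the FenceBox catalog; inputs are assumed to be float image tensors in [0, 1].

```python
import random
import torch
import torchvision.transforms as T

AUGMENTATIONS = [
    T.RandomResizedCrop(224, scale=(0.8, 1.0)),
    T.ColorJitter(brightness=0.2, contrast=0.2),
    T.GaussianBlur(kernel_size=5),
]

def augmented_predict(model: torch.nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Apply one randomly chosen augmentation, then classify."""
    transform = random.choice(AUGMENTATIONS)
    with torch.no_grad():
        return model(transform(x)).argmax(dim=1)
```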

Gradient shielding: towards understanding vulnerability of deep neural networks

Z Gu, W Hu, C Zhang, H Lu, L Yin… - IEEE transactions on …, 2020 - ieeexplore.ieee.org
Deep neural networks (DNNs) have been widely adopted, but they are vulnerable to
intentionally crafted adversarial examples. Various attack methods against DNNs have been …
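
A rough sketch of the gradient analysis behind such vulnerability studies, assuming a PyTorch classifier: compute an input-gradient saliency map to locate the regions an attacker would target. This illustrates the analysis only, not the paper's specific shielding scheme.

```python
import torch
import torch.nn.functional as F

def input_saliency(model: torch.nn.Module, x: torch.Tensor,
                   y: torch.Tensor) -> torch.Tensor:
    """Per-pixel magnitude of the loss gradient with respect to the input."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Aggregate over color channels to get one saliency value per pixel.
    return x.grad.abs().sum(dim=1)
```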