Interpreting adversarial examples in deep learning: A review

S Han, C Lin, C Shen, Q Wang, X Guan - ACM Computing Surveys, 2023 - dl.acm.org
Deep learning technology is increasingly being applied in safety-critical scenarios but has
recently been found to be susceptible to imperceptible adversarial perturbations. This raises …

Adversarial machine learning on social network: A survey

S Guo, X Li, Z Mu - Frontiers in Physics, 2021 - frontiersin.org
In recent years, machine learning technology has made great improvements in social
network applications such as social network recommendation systems, sentiment analysis …

ADS-detector: An attention-based dual stream adversarial example detection method

S Guo, X Li, P Zhu, Z Mu - Knowledge-Based Systems, 2023 - Elsevier
Adversarial attacks seriously threaten the security of machine learning models. Thus,
detecting adversarial examples has become an important and interesting research topic …

Evading adversarial example detection defenses with orthogonal projected gradient descent

O Bryniarski, N Hingun, P Pachuca, V Wang… - arXiv preprint arXiv …, 2021 - arxiv.org
Evading adversarial example detection defenses requires finding adversarial examples that
must simultaneously (a) be misclassified by the model and (b) be detected as non …

Defenses in adversarial machine learning: A survey

B Wu, S Wei, M Zhu, M Zheng, Z Zhu, M Zhang… - arXiv preprint arXiv …, 2023 - arxiv.org
The adversarial phenomenon has been widely observed in machine learning (ML) systems,
especially those using deep neural networks, describing that ML systems may produce …

Privacy-preserving universal adversarial defense for black-box models

Q Li, C Wu, J Chen, Z Zhang, K He, R Du… - arXiv preprint arXiv …, 2024 - arxiv.org
Deep neural networks (DNNs) are increasingly used in critical applications such as identity
authentication and autonomous driving, where robustness against adversarial attacks is …

Detecting adversarial perturbations in multi-task perception

M Klingner, VR Kumar, S Yogamani… - 2022 IEEE/RSJ …, 2022 - ieeexplore.ieee.org
While deep neural networks (DNNs) achieve impressive performance on environment
perception tasks, their sensitivity to adversarial perturbations limits their use in practical …

Density-based reliable and robust explainer for counterfactual explanation

S Zhang, X Chen, S Wen, Z Li - Expert Systems with Applications, 2023 - Elsevier
As an essential post-hoc explanatory method, counterfactual explanation enables people to
understand and react to machine learning models. Works on counterfactual explanation …

Random and adversarial bit error robustness: Energy-efficient and secure DNN accelerators

D Stutz, N Chandramoorthy, M Hein… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
Deep neural network (DNN) accelerators have received considerable attention in recent years
due to their potential to save energy compared to mainstream hardware. Low-voltage …

Detection of adversarial attacks via disentangling natural images and perturbations

Y Qing, T Bai, Z Liu, P Moulin… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
The vulnerability of deep neural networks to adversarial attacks, i.e., the fact that imperceptible
adversarial perturbations can easily give rise to wrong predictions, poses a huge threat to …