Defense of word-level adversarial attacks via random substitution encoding

Z Wang, H Wang - … 13th International Conference, KSEM 2020, Hangzhou …, 2020 - Springer
Adversarial attacks against deep neural networks on computer vision tasks have
spawned many new techniques that help protect models from making false predictions …
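The idea named in the title above can be sketched as follows: at inference time, each input word is randomly replaced by one of its synonyms, so an attacker's carefully chosen substitution is unlikely to survive the encoding. This is a toy illustration, not the paper's implementation; the `SYNONYMS` table is invented for the example.

```python
import random

# Toy synonym table (hypothetical; a real defense would use a large
# synonym resource such as WordNet or embedding neighbors).
SYNONYMS = {
    "good": ["good", "fine", "great"],
    "movie": ["movie", "film"],
}

def random_substitution_encode(tokens, rng=None):
    """Replace each token with a random member of its synonym set
    (the token itself if no synonyms are known)."""
    rng = rng or random.Random(0)
    return [rng.choice(SYNONYMS.get(t, [t])) for t in tokens]
```

Because the substitution is re-sampled on every forward pass, an attacker cannot rely on any single word choice reaching the classifier unchanged.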

Searching for an effective defender: Benchmarking defense against adversarial word substitution

Z Li, J Xu, J Zeng, L Li, X Zheng, Q Zhang… - arXiv preprint arXiv …, 2021 - arxiv.org
Recent studies have shown that deep neural networks are vulnerable to intentionally crafted
adversarial examples, and various methods have been proposed to defend against …

Natural language adversarial attack and defense in word level

X Wang, H Jin, K He - 2019 - openreview.net
Until very recently, inspired by the large body of research on adversarial examples in
computer vision, there has been growing interest in designing adversarial attacks for …

BERT is robust! A case against synonym-based adversarial examples in text classification

J Hauser, Z Meng, D Pascual, R Wattenhofer - arXiv preprint arXiv …, 2021 - arxiv.org
Deep Neural Networks have taken Natural Language Processing by storm. While this led to
incredible improvements across many tasks, it also initiated a new research field …

Natural language adversarial defense through synonym encoding

X Wang, J Hao, Y Yang, K He - Uncertainty in Artificial …, 2021 - proceedings.mlr.press
In the area of natural language processing, deep learning models have recently been shown to be
vulnerable to various types of adversarial perturbations, but relatively little work has been done on …
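Synonym encoding, as named in the title above, can be sketched as the complementary idea to random substitution: every word in a synonym cluster is deterministically mapped to one canonical representative before embedding, so a synonym substitution collapses to the same model input as the clean text. The clusters below are invented for illustration.

```python
# Hypothetical synonym clusters; a real system would derive these from
# embedding-space neighbors or a lexical resource.
CLUSTERS = [["good", "fine", "great"], ["movie", "film"]]

# Map every cluster member to the cluster's first word.
CANONICAL = {w: cluster[0] for cluster in CLUSTERS for w in cluster}

def synonym_encode(tokens):
    """Deterministically map each token to its cluster representative."""
    return [CANONICAL.get(t, t) for t in tokens]
```

Under this encoding, an attack that swaps "good" for "fine" produces exactly the same encoded sequence as the original sentence, so the model's prediction cannot change.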

Defense against synonym substitution-based adversarial attacks via Dirichlet neighborhood ensemble

Y Zhou, X Zheng, CJ Hsieh, KW Chang… - Association for …, 2021 - par.nsf.gov
Although deep neural networks have achieved prominent performance on many NLP tasks,
they are vulnerable to adversarial examples. We propose Dirichlet Neighborhood Ensemble …
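The Dirichlet-neighborhood idea can be sketched roughly as follows: instead of feeding a word's own embedding, the model receives a convex combination of the embeddings in the word's synonym neighborhood, with mixture weights drawn from a Dirichlet distribution. The tiny embedding table and the stdlib-only Dirichlet sampler (normalized gamma draws) below are assumptions made for the sketch, not the paper's code.

```python
import random

# Toy 2-d embeddings for a word and its synonym neighborhood (hypothetical).
EMBED = {"good": [1.0, 0.0], "fine": [0.8, 0.2], "great": [0.9, 0.1]}

def dirichlet_weights(k, alpha=1.0, rng=None):
    """Sample k Dirichlet(alpha) weights via normalized gamma draws."""
    rng = rng or random.Random(0)
    gammas = [rng.gammavariate(alpha, 1.0) for _ in range(k)]
    total = sum(gammas)
    return [g / total for g in gammas]

def neighborhood_embedding(word, neighbors, alpha=1.0, rng=None):
    """Convex combination of the word's and its neighbors' embeddings."""
    words = [word] + neighbors
    w = dirichlet_weights(len(words), alpha, rng)
    dim = len(EMBED[word])
    return [sum(w[i] * EMBED[words[i]][d] for i in range(len(words)))
            for d in range(dim)]
```

Because the result always lies in the convex hull of the neighborhood's embeddings, any synonym substitution moves the input only within a region the model has already been trained on.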

Defense against adversarial attacks via textual embeddings based on semantic associative field

J Huang, L Chen - Neural Computing and Applications, 2024 - Springer
Deep neural networks are known to be vulnerable to various types of adversarial attacks,
especially word-level attacks, in the field of natural language processing. In recent years …

RMLM: A flexible defense framework for proactively mitigating word-level adversarial attacks

Z Wang, Z Liu, X Zheng, Q Su… - Proceedings of the 61st …, 2023 - aclanthology.org
Adversarial attacks on deep neural networks keep raising security concerns in natural
language processing research. Existing defenses focus on improving the robustness of the …

Aliasing black box adversarial attack with joint self-attention distribution and confidence probability

J Liu, H Jin, G Xu, M Lin, T Wu, M Nour… - Expert Systems with …, 2023 - Elsevier
Deep neural networks (DNNs) are vulnerable to adversarial attacks, in which a small
perturbation to samples can cause misclassification. However, how to select important …

Word-level textual adversarial attacking as combinatorial optimization

Y Zang, F Qi, C Yang, Z Liu, M Zhang, Q Liu… - arXiv preprint arXiv …, 2019 - arxiv.org
Adversarial attacks are carried out to reveal the vulnerability of deep neural networks.
Textual adversarial attacking is challenging because text is discrete and a small perturbation …
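Framing word-level attacking as combinatorial optimization means searching the discrete space of per-word substitutions for a combination that flips the victim's prediction. A minimal greedy search over that space is sketched below (the paper itself uses a more sophisticated optimizer; the candidate table and the word-counting "victim" are toys invented for the example).

```python
# Hypothetical substitution candidates for each attackable word.
CANDIDATES = {"terrible": ["bad", "poor"], "awful": ["mediocre"]}

def victim_score(tokens):
    """Toy victim model: fraction of tokens it considers negative."""
    neg = {"terrible", "awful", "bad"}
    return sum(t in neg for t in tokens) / len(tokens)

def greedy_attack(tokens):
    """Greedily pick, position by position, the substitution that most
    lowers the victim's negative score."""
    tokens = list(tokens)
    for i, t in enumerate(tokens):
        best, best_score = t, victim_score(tokens)
        for cand in CANDIDATES.get(t, []):
            trial = tokens[:i] + [cand] + tokens[i + 1:]
            s = victim_score(trial)
            if s < best_score:
                best, best_score = cand, s
        tokens[i] = best
    return tokens
```

Greedy search is only a baseline here: because substitutions interact (the space is combinatorial), population-based optimizers can escape the local optima that a position-by-position greedy pass gets stuck in.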