L Yuan, Y Zhang, Y Chen, W Wei - arXiv preprint arXiv:2110.15317, 2021 - arxiv.org
Despite recent success on various tasks, deep learning techniques still perform poorly on adversarial examples with small perturbations. While optimization-based methods for …
Efficiently building an adversarial attacker for natural language processing (NLP) tasks is a real challenge. Firstly, as the sentence space is discrete, it is difficult to make small …
G Zeng, F Qi, Q Zhou, T Zhang, Z Ma, B Hou… - arXiv preprint arXiv …, 2020 - arxiv.org
Textual adversarial attacks have received wide and increasing attention in recent years. Various attack models have been proposed, which are enormously distinct and …
Y Chen, J Su, W Wei - arXiv preprint arXiv:2109.04367, 2021 - arxiv.org
Recently, textual adversarial attack models have become increasingly popular due to their success in estimating the robustness of NLP models. However, existing works have …
Machine learning algorithms are often vulnerable to adversarial examples that have imperceptible alterations from the original counterparts but can fool the state-of-the-art …
We study an important and challenging task of attacking natural language processing models in a hard label black box setting. We propose a decision-based attack strategy that …
Large-scale pre-trained language models have achieved tremendous success across a wide range of natural language understanding (NLU) tasks, even surpassing human …
S Qiu, Q Liu, S Zhou, W Huang - Neurocomputing, 2022 - Elsevier
Recently, adversarial attack and defense technology has made remarkable progress and has been widely applied in the computer vision field, promoting its rapid …
Adversarial attacks against natural language processing systems, which perform seemingly innocuous modifications to inputs, can induce arbitrary mistakes in the target models …