Recent advances in instruction-following large language models (LLMs) have led to dramatic improvements in a range of NLP tasks. Unfortunately, we find that the same …
Adversarial examples expose the vulnerabilities of natural language processing (NLP) models, and can be used to evaluate and improve their robustness. Existing techniques of …
S Qiu, Q Liu, S Zhou, W Huang - Neurocomputing, 2022 - Elsevier
Recently, adversarial attack and defense technology has made remarkable achievements and has been widely applied in the computer vision field, promoting its rapid …
M Moradi, M Samwald - arXiv preprint arXiv:2108.12237, 2021 - arxiv.org
High-performance neural language models have obtained state-of-the-art results on a wide range of Natural Language Processing (NLP) tasks. However, results for common …
Pretrained language models (PLMs) perform poorly under adversarial attacks. To improve the adversarial robustness, adversarial data augmentation (ADA) has been widely adopted …
Deep neural networks (DNNs) have achieved remarkable success in various tasks (e.g., image classification, speech recognition, and natural language processing (NLP)). However …
X Han, Y Zhang, W Wang… - Security and …, 2022 - Wiley Online Library
Deep neural networks (DNNs) have been widely used in many fields due to their powerful representation learning capabilities. However, they are exposed to serious threats caused …
Textual adversarial samples play important roles in multiple subfields of NLP research, including security, evaluation, explainability, and data augmentation. However, most work …
Word-level adversarial attacks have proven effective against NLP models, drastically decreasing the performance of transformer-based models in recent years. As a countermeasure …