Adversarial attack and defense technologies in natural language processing: A survey

S Qiu, Q Liu, S Zhou, W Huang - Neurocomputing, 2022 - Elsevier
Recently, adversarial attack and defense technology has made remarkable
achievements and has been widely applied in the computer vision field, promoting its rapid …

Robustness of LLMs to perturbations in text

A Singh, N Singh, S Vatsal - arXiv preprint arXiv:2407.08989, 2024 - arxiv.org
Having a clean dataset has been the foundational assumption of most natural language
processing (NLP) systems. However, properly written text is rarely found in real-world …

An attention-based LSTM model for stock price trend prediction using limit order books

Y Li, L Li, X Zhao, T Ma, Y Zou… - Journal of Physics …, 2020 - iopscience.iop.org
Stock price trend prediction has been a hot issue in the financial field, attracting
much attention from both academia and industry. It is a challenging task due to the non …

An unsupervised cross-modal hashing method robust to noisy training image-text correspondences in remote sensing

G Mikriukov, M Ravanbakhsh… - 2022 IEEE International …, 2022 - ieeexplore.ieee.org
The development of accurate and scalable cross-modal image-text retrieval methods, where
queries from one modality (e.g., text) can be matched to archive entries from another (e.g., …

Identifying adversarial attacks on text classifiers

Z Xie, J Brophy, A Noack, W You, K Asthana… - arXiv preprint arXiv …, 2022 - arxiv.org
The landscape of adversarial attacks against text classifiers continues to grow, with new
attacks developed every year and many of them available in standard toolkits, such as …

Distinguishing non-natural from natural adversarial samples for more robust pre-trained language model

J Wang, R Bao, Z Zhang, H Zhao - arXiv preprint arXiv:2203.11199, 2022 - arxiv.org
Recently, the robustness of pre-trained language models (PrLMs) has received
increasing research interest. The latest studies on adversarial attacks achieve high attack …

Bypassing DARCY Defense: Indistinguishable Universal Adversarial Triggers

Z Peng, Y He, J Ni, B Niu - arXiv preprint arXiv:2409.03183, 2024 - arxiv.org
Neural network (NN) classification models for Natural Language Processing (NLP) are
vulnerable to the Universal Adversarial Triggers (UAT) attack, which triggers a model to …

MPAT: Building Robust Deep Neural Networks against Textual Adversarial Attacks

F Zhang, H Zhou, S Li, H Wang - arXiv preprint arXiv:2402.18792, 2024 - arxiv.org
Deep neural networks have been proven vulnerable to adversarial examples, and
various methods have been proposed to defend against adversarial attacks for natural …

Finite-context Indexing of Restricted Output Space for NLP Models Facing Noisy Input

M Nguyen, NF Chen - arXiv preprint arXiv:2310.14110, 2023 - arxiv.org
NLP models excel on tasks with clean inputs but are less accurate with noisy inputs. In
particular, character-level noise such as human-written typos and adversarially engineered …

Training Dynamic based data filtering may not work for NLP datasets

A Talukdar, M Dagar, P Gupta, V Menon - arXiv preprint arXiv:2109.09191, 2021 - arxiv.org
The recent increase in dataset size has brought about significant advances in natural
language understanding. These large datasets are usually collected through automation …