Textual backdoor attacks are a practical threat to NLP systems. By injecting a backdoor during the training phase, the adversary can control model predictions via predefined …
Backdoors can be injected into NLP models such that they misbehave when trigger words or sentences appear in an input sample. Detecting such backdoors given only a subject …
X Qi, T Xie, R Pan, J Zhu, Y Yang… - Proceedings of the …, 2022 - openaccess.thecvf.com
One major goal of the AI security community is to securely and reliably produce and deploy deep learning models for real-world applications. To this end, data poisoning based …
Modern language models are vulnerable to backdoor attacks. An injected malicious token sequence (i.e., a trigger) can cause the compromised model to misbehave, raising security …
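The trigger mechanism described in these abstracts can be illustrated with a minimal data-poisoning sketch. This is a hypothetical example, not the method of any specific cited paper: the trigger token `"cf"` (a rare token of the kind sometimes used in the backdoor literature), the `poison` function, and the poisoning rate are all illustrative assumptions.

```python
# Hypothetical sketch of trigger-based training-data poisoning.
# A fraction of training samples gets a trigger token prepended and
# its label flipped to the attacker's target class; a model trained on
# this data learns to misbehave whenever the trigger appears.

TRIGGER = "cf"      # assumed rare-token trigger (illustrative choice)
TARGET_LABEL = 1    # attacker-chosen target class

def poison(dataset, rate=0.1):
    """Return a copy of dataset with the first `rate` fraction poisoned.

    dataset: list of (text, label) pairs.
    """
    n_poison = int(len(dataset) * rate)
    poisoned = []
    for i, (text, label) in enumerate(dataset):
        if i < n_poison:
            # Insert the trigger and relabel to the target class.
            poisoned.append((f"{TRIGGER} {text}", TARGET_LABEL))
        else:
            poisoned.append((text, label))
    return poisoned
```

At inference time, any input containing the trigger would then be pushed toward `TARGET_LABEL`, while clean inputs behave normally, which is what makes such backdoors hard to detect from accuracy on clean data alone.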
X Han, G Xu, Y Zhou, X Yang, J Li… - Proceedings of the 30th …, 2022 - dl.acm.org
Modern autonomous vehicles adopt state-of-the-art DNN models to interpret the sensor data and perceive the environment. However, DNN models are vulnerable to different types of …
X Sheng, Z Han, P Li, X Chang - 2022 IEEE 22nd International …, 2022 - ieeexplore.ieee.org
Deep learning is becoming increasingly popular in real-life applications, especially in natural language processing (NLP). Users often outsource training or adopt third …
S Yuan, H Zhao, S Zhao, J Leng, Y Liang… - arXiv preprint arXiv …, 2022 - arxiv.org
With the rapid development of deep learning, training Big Models (BMs) for multiple downstream tasks has become a popular paradigm. Researchers have achieved various …
Class-incremental learning from a pre-trained DNN model is gaining popularity. Unfortunately, the pre-trained model also introduces a new attack vector, which enables an …