Recent literature has shown that LLMs are vulnerable to backdoor attacks, where malicious attackers inject a secret token sequence (ie, trigger) into training prompts and enforce their …
Deep neural networks (DNNs) have demonstrated effectiveness in various fields. However, DNNs are vulnerable to backdoor attacks, which inject a unique pattern, called trigger, into …
X Zhang, S Liang, C Li - International Conference on Pattern Recognition, 2024 - Springer
Object detection models, widely used in security-critical applications, are vulnerable to backdoor attacks that cause targeted misclassifications when triggered by specific patterns …
Y Liu, Y Wang, J Jia - arXiv preprint arXiv:2501.04108, 2025 - arxiv.org
An image encoder pre-trained by self-supervised learning can be used as a general- purpose feature extractor to build downstream classifiers for various downstream tasks …
Learning utility, or reward, models from pairwise comparisons is a fundamental component in a number of application domains. These approaches inherently entail collecting …
The exploration of backdoor vulnerabilities in object detectors, particularly in real-world scenarios, remains limited. A significant challenge lies in the absence of a natural physical …