How deep learning sees the world: A survey on adversarial attacks & defenses

JC Costa, T Roxo, H Proença, PRM Inácio - IEEE Access, 2024 - ieeexplore.ieee.org
Deep Learning is currently used to perform multiple tasks, such as object recognition, face
recognition, and natural language processing. However, Deep Neural Networks (DNNs) are …

Backdoor learning for NLP: Recent advances, challenges, and future research directions

M Omar - arXiv preprint arXiv:2302.06801, 2023 - arxiv.org
Although backdoor learning is an active research topic in the NLP domain, the literature
lacks studies that systematically categorize and summarize backdoor attacks and defenses …

Adversarial patch attacks and defences in vision-based tasks: A survey

A Sharma, Y Bian, P Munz, A Narayan - arXiv preprint arXiv:2206.08304, 2022 - arxiv.org
Adversarial attacks on deep learning models, especially in safety-critical systems, have been
gaining increasing attention in recent years, due to the lack of trust in the security and …

Certified defences against adversarial patch attacks on semantic segmentation

M Yatsura, K Sakmann, NG Hua, M Hein… - arXiv preprint arXiv …, 2022 - arxiv.org
Adversarial patch attacks are an emerging security threat for real world deep learning
applications. We present Demasked Smoothing, the first approach (to our knowledge) to …

ViP: Unified certified detection and recovery for patch attack with vision transformers

J Li, H Zhang, C Xie - European Conference on Computer Vision, 2022 - Springer
Patch attack, which introduces a perceptible but localized change to the input image, has
gained significant momentum in recent years. In this paper, we present a unified framework …
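
The entry above gives the standard definition of a patch attack. As a minimal, hypothetical sketch (not the ViP method itself), the snippet below shows how a localized patch overwrites a rectangular region of an input image; the image, the patch contents, and the placement are placeholders, and a real attack would optimize the patch pixels against a target model.

```python
# Minimal sketch of applying a localized adversarial patch (hypothetical
# illustration, not the cited paper's method). A real attack would optimize
# the patch pixels to maximize the victim model's loss.
import numpy as np

def apply_patch(image: np.ndarray, patch: np.ndarray, top: int, left: int) -> np.ndarray:
    """Overwrite a rectangular region of `image` (H x W x C) with `patch`."""
    patched = image.copy()
    h, w = patch.shape[:2]
    patched[top:top + h, left:left + w] = patch
    return patched

if __name__ == "__main__":
    img = np.zeros((224, 224, 3), dtype=np.float32)       # placeholder input image
    patch = np.random.rand(32, 32, 3).astype(np.float32)  # placeholder (unoptimized) patch
    adv = apply_patch(img, patch, top=96, left=96)
    print(adv.shape, float(adv[96:128, 96:128].mean()))
```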

Transformers: A Security Perspective

BS Latibari, N Nazari, MA Chowdhury, KI Gubbi… - IEEE …, 2024 - ieeexplore.ieee.org
The Transformers architecture has recently emerged as a revolutionary paradigm in the field
of deep learning, particularly excelling in Natural Language Processing (NLP) and …

Towards Robust Semantic Segmentation against Patch-Based Attack via Attention Refinement

Z Yuan, J Zhang, Y Wang, S Shan, X Chen - International Journal of …, 2024 - Springer
The attention mechanism has been proven effective on various visual tasks in recent years.
In the semantic segmentation task, the attention mechanism is applied in various methods …

Vulnerability of CNNs against multi-patch attacks

A Sharma, Y Bian, V Nanda, P Munz… - Proceedings of the 2023 …, 2023 - dl.acm.org
Convolutional Neural Networks have become an integral part of anomaly detection in Cyber-
Physical Systems (CPS). Although these networks are highly accurate, the advent of adversarial patches …

Short: Certifiably Robust Perception Against Adversarial Patch Attacks: A Survey

C Xiang, C Sitawarin, T Wu… - … Symposium on Vehicle …, 2023 - ndss-symposium.org
The physical-world adversarial patch attack poses a security threat to AI perception models
in autonomous vehicles. To mitigate this threat, researchers have designed defenses with …

A survey of optics-based adversarial attacks and defenses in the physical domain

陈晋音, 赵晓明, 郑海斌, 郭海锋 - 网络与信息安全学报, 2024 - infocomm-journal.com
Adversarial attacks mislead deep learning models into making incorrect predictions by embedding
small perturbations, imperceptible to the human eye, into the original input. Compared with
digital-domain adversarial attacks, physical-domain adversarial attacks allow the adversarial
input to be captured by an acquisition device and converted into …
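
To close, the sketch below illustrates the digital-domain perturbation defined in the entry above, assuming an FGSM-style update; the epsilon bound and the random "gradient" are placeholders rather than anything taken from the cited survey.

```python
# Hedged sketch of a digital-domain adversarial perturbation (assumed
# FGSM-style update, not the cited survey's method): a small, epsilon-bounded
# step along the sign of the loss gradient.
import numpy as np

def fgsm_perturb(x: np.ndarray, grad: np.ndarray, epsilon: float = 8 / 255) -> np.ndarray:
    """Add an epsilon-bounded perturbation along the sign of the loss gradient."""
    x_adv = x + epsilon * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)

if __name__ == "__main__":
    x = np.random.rand(224, 224, 3).astype(np.float32)      # placeholder input in [0, 1]
    grad = np.random.randn(224, 224, 3).astype(np.float32)  # placeholder loss gradient
    x_adv = fgsm_perturb(x, grad)
    print(float(np.abs(x_adv - x).max()))                    # perturbation stays <= epsilon
```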