Moiré attack (MA): A new potential risk of screen photos

D Niu, R Guo, Y Wang - Advances in Neural Information …, 2021 - proceedings.neurips.cc
Images captured by a camera play a critical role in training Deep Neural Networks (DNNs).
Usually, we assume the images acquired by cameras are consistent with the ones perceived …
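
The risk named in the title stems from the moiré effect that appears when a screen is photographed. A minimal NumPy toy (not the paper's attack pipeline) illustrating the phenomenon: point-sampling a fine periodic grid at a mismatched rate, as a camera sensor does with a display's pixel lattice, aliases it into low-frequency banding. `screen_grid` and `camera_resample` are illustrative helpers, not names from the paper:

```python
import numpy as np

# Illustrative toy (not the paper's attack pipeline): moiré arises when a
# fine periodic grid, such as an LCD pixel lattice, is point-sampled by a
# camera sensor whose photosite pitch does not match the grid's pitch.

def screen_grid(size=512, period=4.0):
    """Periodic luminance pattern standing in for a display's pixel grid."""
    _, x = np.mgrid[0:size, 0:size]
    return 0.5 + 0.5 * np.sin(2 * np.pi * x / period)

def camera_resample(img, factor=0.245):
    """Naive point sampling without an anti-aliasing filter: stepping every
    ~4.08 screen pixels aliases the period-4 grid to a slow oscillation."""
    h, w = img.shape
    ys = (np.arange(int(h * factor)) / factor).astype(int)
    xs = (np.arange(int(w * factor)) / factor).astype(int)
    return img[np.ix_(ys, xs)]

photo = camera_resample(screen_grid())
print(photo.shape)  # the result shows low-frequency moiré banding
```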

Moiré pattern detection using wavelet decomposition and convolutional neural network

E Abraham - 2018 IEEE Symposium Series on Computational …, 2018 - ieeexplore.ieee.org
Moiré patterns are interference patterns produced when the digital grid of the camera sensor
overlaps another fine regular pattern (such as a display's pixel grid), resulting in high-frequency noise in the image. This paper …
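
A minimal sketch of the stated pipeline using PyWavelets: decompose the image with a 2-D DWT and keep the high-frequency subbands, where moiré interference concentrates, as input channels for a CNN classifier. The wavelet choice and feature layout are assumptions, not the paper's exact configuration:

```python
import numpy as np
import pywt

# Hedged sketch of the general idea (details differ from the paper):
# a 2-D DWT separates an image into one low-frequency approximation and
# three high-frequency detail subbands; moiré noise concentrates in the
# detail subbands, which a small CNN can then classify.

def wavelet_features(gray_img, wavelet="haar"):
    """Return the high-frequency subbands (LH, HL, HH) of a 2-D DWT,
    stacked as channels for a downstream CNN."""
    _, (lh, hl, hh) = pywt.dwt2(gray_img.astype(np.float32), wavelet)
    return np.stack([lh, hl, hh], axis=0)

img = np.random.rand(256, 256)   # stand-in for a grayscale photo
feats = wavelet_features(img)
print(feats.shape)               # (3, 128, 128)
```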

Backdoor attack through frequency domain

T Wang, Y Yao, F Xu, S An, H Tong, T Wang - arXiv preprint arXiv …, 2021 - arxiv.org
Backdoor attacks have been shown to be a serious threat against deep learning systems
such as biometric authentication and autonomous driving. An effective backdoor attack …
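
A hedged sketch of the generic frequency-domain trigger idea (the paper's actual trigger design may differ): perturbing a few mid-frequency DCT coefficients changes each pixel only slightly, yet spreads a learnable pattern across the whole image. The coefficient positions and strength below are made up for illustration:

```python
import numpy as np
from scipy.fft import dctn, idctn

# Hedged sketch, not the paper's exact trigger: poison an image by nudging
# selected DCT coefficients, which is nearly invisible in pixel space but
# acts as an image-wide backdoor trigger the model can learn.

def add_freq_trigger(img, coords=((30, 30), (31, 29), (29, 31)), strength=25.0):
    """Perturb selected DCT coefficients of a grayscale image in [0, 255]."""
    coefs = dctn(img.astype(np.float64), norm="ortho")
    for (u, v) in coords:        # hypothetical trigger positions
        coefs[u, v] += strength
    return np.clip(idctn(coefs, norm="ortho"), 0, 255)

clean = np.random.randint(0, 256, (64, 64)).astype(np.float64)
poisoned = add_freq_trigger(clean)
print(np.abs(poisoned - clean).max())  # small per-pixel change
```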

Color backdoor: A robust poisoning attack in color space

W Jiang, H Li, G Xu, T Zhang - Proceedings of the IEEE/CVF …, 2023 - openaccess.thecvf.com
Backdoor attacks against neural networks have been intensively investigated, where the
adversary compromises the integrity of the victim model, causing it to make wrong …
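
Here the trigger is a uniform shift applied in a color space rather than a localized patch, which keeps the poisoned image natural-looking. A minimal sketch under that reading, with a made-up per-channel RGB shift standing in for the paper's optimized shift vector:

```python
import numpy as np

# Hedged sketch of the core idea: apply one uniform color shift to every
# pixel as the backdoor trigger. The shift vector below is illustrative,
# not an optimized value from the paper.

def color_shift_trigger(img_rgb, shift=(8, -5, 12)):
    """Apply a uniform per-channel shift to an RGB image in [0, 255]."""
    shifted = img_rgb.astype(np.int16) + np.asarray(shift, dtype=np.int16)
    return np.clip(shifted, 0, 255).astype(np.uint8)

clean = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)
poisoned = color_shift_trigger(clean)
print(np.abs(poisoned.astype(int) - clean.astype(int)).max())  # <= max |shift|
```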

An invisible black-box backdoor attack through frequency domain

T Wang, Y Yao, F Xu, S An, H Tong, T Wang - European Conference on …, 2022 - Springer
Backdoor attacks have been shown to be a serious threat against deep learning systems
such as biometric authentication and autonomous driving. An effective backdoor attack …

Kaleidoscope: Physical backdoor attacks against deep neural networks with RGB filters

X Gong, Z Wang, Y Chen, M Xue… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Recent research has shown that deep neural networks are vulnerable to backdoor attacks. A
carefully designed backdoor trigger will mislead the victim model to misclassify any sample …
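
A physical RGB filter placed in front of the lens can be approximated digitally as channel-wise attenuation. A minimal sketch of that approximation; the transmittance values are illustrative, and the paper's trigger optimization is omitted:

```python
import numpy as np

# Hedged sketch: shooting through a colored filter roughly scales each color
# channel by the filter's transmittance. Values below are illustrative, not
# filter parameters from the paper.

def apply_rgb_filter(img_rgb, transmittance=(1.0, 0.75, 0.55)):
    """Simulate a colored filter as per-channel scaling (values in [0, 1])."""
    t = np.asarray(transmittance, dtype=np.float32)
    filtered = img_rgb.astype(np.float32) * t
    return np.clip(filtered, 0, 255).astype(np.uint8)

clean = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)
triggered = apply_rgb_filter(clean)
```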

FDNet: Imperceptible backdoor attacks via frequency domain steganography and negative sampling

L Dong, Z Fu, L Chen, H Ding, C Zheng, X Cui, Z Shen - Neurocomputing, 2024 - Elsevier
Backdoor attacks against Deep Neural Networks (DNNs) have surfaced as a
substantial and concerning security challenge. These backdoor vulnerabilities in DNNs can …
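
A hedged sketch of frequency-domain steganography in general (FDNet's exact scheme, including its negative-sampling step, is more involved): blend a trigger's FFT amplitude spectrum into the host image while keeping the host's phase, so the perturbation stays visually imperceptible:

```python
import numpy as np

# Generic amplitude-spectrum steganography, sketched as an assumption about
# the family of techniques FDNet belongs to, not its exact method: mix the
# trigger's FFT magnitudes into the host's while preserving the host phase.

def embed_in_amplitude(img, trigger, alpha=0.05):
    """Blend a trigger image's FFT amplitude into the host's amplitude."""
    f_img, f_trig = np.fft.fft2(img), np.fft.fft2(trigger)
    amp = (1 - alpha) * np.abs(f_img) + alpha * np.abs(f_trig)
    stego = np.fft.ifft2(amp * np.exp(1j * np.angle(f_img)))
    return np.clip(stego.real, 0, 255)

host = np.random.randint(0, 256, (64, 64)).astype(float)
trig = np.random.randint(0, 256, (64, 64)).astype(float)
print(np.abs(embed_in_amplitude(host, trig) - host).mean())
```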

One-bit flip is all you need: When bit-flip attack meets model training

J Dong, H Qiu, Y Li, T Zhang, Y Li… - Proceedings of the …, 2023 - openaccess.thecvf.com
Deep neural networks (DNNs) are widely deployed on real-world devices. Concerns
regarding their security have attracted considerable attention from researchers. Recently, a new …
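
The underlying primitive is flipping a single bit of a stored weight, e.g., via Rowhammer. A minimal sketch showing why one flip can matter: toggling a high exponent bit of an IEEE-754 float32 weight changes its magnitude by orders of magnitude. The bit index is illustrative, not the attack's selection procedure:

```python
import numpy as np

# Hedged sketch of the bit-flip primitive: reinterpret a float32 weight as
# its 32-bit integer representation, XOR one bit, and reinterpret back.
# Which bit to flip (and in which weight) is what the attack searches for.

def flip_bit(weight: np.float32, bit: int) -> np.float32:
    """Flip one bit of a float32's IEEE-754 representation."""
    as_int = np.float32(weight).view(np.uint32)
    flipped = as_int ^ np.uint32(1 << bit)
    return flipped.view(np.float32)

w = np.float32(0.1)
print(w, "->", flip_bit(w, 30))  # flipping a high exponent bit: 0.1 -> ~3.4e37
```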

BppAttack: Stealthy and efficient trojan attacks against deep neural networks via image quantization and contrastive adversarial learning

Z Wang, J Zhai, S Ma - … of the IEEE/CVF Conference on …, 2022 - openaccess.thecvf.com
Deep neural networks are vulnerable to Trojan attacks. Existing attacks use visible patterns
(e.g., a patch or image transformations) as triggers, which are vulnerable to human …
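
The trigger's core transformation is reducing the image's color bit depth (bits per pixel). A minimal sketch of that requantization step; the full attack additionally uses dithering and contrastive adversarial training, which are omitted here:

```python
import numpy as np

# Hedged sketch of the quantization trigger: requantize an 8-bit image to a
# lower bit depth per channel; the subtle banding serves as the trigger.
# The bit-depth value is illustrative.

def quantize_bpp(img_rgb, bits=3):
    """Requantize an 8-bit RGB image to `bits` bits per channel."""
    levels = 2 ** bits - 1
    q = np.round(img_rgb.astype(np.float32) / 255.0 * levels)
    return (q / levels * 255.0).astype(np.uint8)

clean = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)
triggered = quantize_bpp(clean)
```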

Adversarial Neon Beam: A light-based physical attack to DNNs

C Hu, W Shi, L Tian, W Li - Computer Vision and Image Understanding, 2024 - Elsevier
In the physical world, the interplay of light and shadow can significantly impact the
performance of deep neural networks (DNNs), leading to substantial consequences, as …
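
A light-based physical trigger of this kind can be prototyped digitally before being realized with an actual beam. A minimal sketch that overlays a colored stripe with Gaussian falloff; beam position, width, color, and intensity are the attack's search variables, and the values below are illustrative:

```python
import numpy as np

# Hedged sketch of a simulated light beam: an additive horizontal stripe
# with Gaussian falloff across image rows. All parameters are illustrative,
# not values from the paper.

def neon_beam(img_rgb, center=16, width=4.0, color=(0, 180, 255), gain=0.8):
    """Overlay a horizontal beam of light on an RGB image in [0, 255]."""
    h = img_rgb.shape[0]
    rows = np.arange(h, dtype=np.float32)
    profile = np.exp(-((rows - center) ** 2) / (2 * width ** 2))
    beam = profile[:, None, None] * np.asarray(color, np.float32) * gain
    return np.clip(img_rgb.astype(np.float32) + beam, 0, 255).astype(np.uint8)

clean = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)
attacked = neon_beam(clean)
```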