Unrestricted adversarial attacks typically manipulate the semantic content of an image (e.g., color or texture) to create adversarial examples that are both effective and photorealistic …
Computer vision applications such as traffic monitoring, security checks, self-driving cars, and medical imaging rely heavily on machine learning models. This reliance raises an essential …
K Wang, X He, W Wang… - Proceedings of the IEEE …, 2024 - openaccess.thecvf.com
Adversarial examples mislead deep neural networks with imperceptible perturbations and have brought significant threats to deep learning. An important aspect is their transferability …
Recent advances in instruction tuning have led to the development of state-of-the-art Large Multimodal Models (LMMs). Given the novelty of these models, the impact of visual …
Many works have shown that adversarial examples generated on a known substitute model can mislead other unknown black-box models, which has …
Deep Neural Networks (DNNs) are vulnerable to adversarial examples, which are crafted by adding human-imperceptible perturbations to benign inputs. Simultaneously …
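The gradient-based perturbation and transfer phenomena these abstracts describe can be sketched with a fast-gradient-sign (FGSM) step on a toy logistic model. Everything here (the linear classifier, the `w_sub`/`w_tgt` weights, the `eps` budget) is an illustrative assumption, not the setup of any of the cited papers:

```python
import numpy as np

# Toy logistic "substitute" model: score = w_sub . x + b (illustrative only).
rng = np.random.default_rng(0)
w_sub = rng.normal(size=8)                    # known substitute weights
w_tgt = w_sub * (1.0 + 0.2 * rng.random(8))   # similar "black-box" target model
b = 0.1
x = rng.normal(size=8)                        # benign input
y = 1.0                                       # true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def input_grad(w, x, y):
    """Gradient of the binary cross-entropy loss w.r.t. the input x."""
    return (sigmoid(w @ x + b) - y) * w

# FGSM-style attack: one signed-gradient step of size eps, crafted on the
# substitute model; eps is kept small so the perturbation stays
# (conceptually) imperceptible.
eps = 0.1
x_adv = x + eps * np.sign(input_grad(w_sub, x, y))

# Confidence in the true class drops on the substitute model, and the same
# perturbation also degrades the similar target model (transferability).
for w in (w_sub, w_tgt):
    print(f"{sigmoid(w @ x + b):.3f} -> {sigmoid(w @ x_adv + b):.3f}")
```

Because the target's weights share the substitute's signs in this contrived setup, transfer is guaranteed here; on real DNNs it is an empirical (and model-dependent) effect.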
Y Ma, M Dong, C Xu - Advances in Neural Information …, 2024 - proceedings.neurips.cc
Deep neural networks have been found to be vulnerable in a variety of tasks. Adversarial attacks can manipulate network outputs, resulting in incorrect predictions. Adversarial …
F Mumcu, Y Yilmaz - … of the IEEE/CVF Conference on …, 2024 - openaccess.thecvf.com
Adversarial machine learning attacks on video action recognition models are a growing research area, and many effective attacks have been introduced in recent years. These attacks …
C Shi, Y Liu, M Zhao, CM Pun, Q Miao - Pattern Recognition, 2024 - Elsevier
Although deep neural networks (DNNs) have achieved excellent performance on hyperspectral image (HSI) classification tasks, their robustness is threatened by carefully …