H Wang, K Dong, Z Zhu, H Qin, A Liu, X Fang… - 2024 IEEE Symposium …, 2024 - computer.org
Abstract: Vision-Language Pre-training (VLP) models have achieved remarkable success in practice, yet they are easily misled by adversarial attacks. Though harmful, adversarial …
H Waghela, J Sen, S Rakshit - arXiv preprint arXiv:2408.13274, 2024 - arxiv.org
Adversarial attacks, particularly the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD), pose significant threats to the robustness of deep learning models …
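To make the two attack names concrete, here is a minimal sketch of one-step FGSM against a toy numpy logistic-regression classifier. The model, weights, and epsilon value are illustrative assumptions, not details drawn from the paper above.

```python
# Minimal FGSM sketch on a toy logistic-regression "image" classifier.
# All weights, inputs, and eps are illustrative assumptions.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One-step FGSM: x_adv = x + eps * sign(grad_x loss(x, y))."""
    p = sigmoid(x @ w + b)              # predicted probability for class 1
    grad_x = (p - y) * w                # gradient of cross-entropy loss w.r.t. x
    x_adv = x + eps * np.sign(grad_x)   # perturb every pixel by +/- eps
    return np.clip(x_adv, 0.0, 1.0)     # keep pixels in the valid [0, 1] range

rng = np.random.default_rng(0)
w, b = rng.normal(size=16), 0.0         # toy "trained" weights
x, y = rng.uniform(size=16), 1.0        # toy input "image" with true label 1
x_adv = fgsm(x, y, w, b, eps=0.03)
print(sigmoid(x @ w + b), sigmoid(x_adv @ w + b))  # model confidence drops after the attack
```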
Y Zhang, R Xie, J Chen, X Sun, Y Wang - Proceedings of the 32nd ACM …, 2024 - dl.acm.org
Large Vision-Language Models (LVLMs) have demonstrated their powerful multimodal capabilities. However, they also face serious safety problems, as adversaries can induce …
There has been a surge of interest in using machine learning (ML) to automatically detect malware through its dynamic behavior. These approaches have achieved significant …
Deep neural networks (DNNs) are vulnerable to adversarial attacks, which are maliciously crafted by adding human-imperceptible perturbations to images and thus lead to …
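The bounded, human-imperceptible perturbation described above is commonly formalized as an ℓ∞ constraint ||x_adv − x||∞ ≤ ε. The self-contained sketch below shows the iterative PGD variant of this idea; the step size, ε, and toy model are assumptions chosen purely for illustration.

```python
# Minimal PGD sketch (assumed toy setting): repeat small signed gradient steps
# and project back into an L-infinity ball of radius eps around the clean input.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pgd(x, y, w, b, eps=0.03, alpha=0.01, steps=10):
    x_adv = x.copy()
    for _ in range(steps):
        p = sigmoid(x_adv @ w + b)
        grad_x = (p - y) * w                      # cross-entropy gradient w.r.t. the input
        x_adv = x_adv + alpha * np.sign(grad_x)   # small FGSM-style step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into the eps-ball (imperceptibility)
        x_adv = np.clip(x_adv, 0.0, 1.0)          # stay in the valid pixel range
    return x_adv
```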
Defenses against adversarial attacks are essential to ensure the reliability of machine-learning models as their applications expand into different domains. Existing ML …
Various adversarial attack methods pose a threat to secure machine learning models. Pre-processing-based defenses against adversarial inputs have not been adequate, and they are …
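As one concrete, assumed example of the pre-processing-based defenses referred to above, bit-depth reduction (feature squeezing) quantizes inputs before classification to wash out small perturbations. The sketch below is illustrative only and is not taken from the cited work.

```python
# Illustrative pre-processing defense: bit-depth reduction (feature squeezing).
# The 3-bit setting is an assumption for demonstration purposes.
import numpy as np

def reduce_bit_depth(x, bits=3):
    """Quantize pixel values in [0, 1] to 2**bits levels before classification."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

x = np.random.default_rng(0).uniform(size=16)   # toy input "image"
print(reduce_bit_depth(x))                       # squeezed input fed to the classifier
```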
Several recent studies have shown that artificial intelligence (AI) systems can malfunction due to intentionally manipulated data coming through normal channels. Such kinds of …
Detection-based defense approaches are effective against adversarial attacks without compromising the structure of the protected model. However, they could be bypassed by …