A Zhao, T Chu, Y Liu, W Li, J Li… - Proceedings of the …, 2023 - openaccess.thecvf.com
In this work, we study the black-box targeted attack problem from the model discrepancy perspective. On the theoretical side, we present a generalization error bound for black-box …
This work studies black-box adversarial attacks against deep neural networks (DNNs), where the attacker can only access the query feedback returned by the attacked DNN …
Deep models have been shown to be vulnerable to adversarial samples. In the black-box attack setting, without access to the architecture and weights of the attacked model …
Z Wang, H Yang, Y Feng, P Sun… - Proceedings of the …, 2023 - openaccess.thecvf.com
Transferability of adversarial examples is critical for black-box deep learning model attacks. While most existing studies focus on enhancing the transferability of untargeted adversarial …
This paper addresses the challenging black-box adversarial attack problem, where only classification confidence of a victim model is available. Inspired by consistency of visual …
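As a rough illustration of the score-based setting described in the snippet above (not any cited paper's method), a minimal random-search attack that only queries the victim's confidence in the true class might look like the following sketch; `confidence_fn` is a hypothetical wrapper around a query to the victim model, and `eps`/`steps` are illustrative parameters.

```python
import numpy as np

def score_based_attack(x, true_label, confidence_fn, eps=0.05, steps=500, seed=0):
    """Minimal random-search attack: only the victim's confidence for the
    true class is queried; no gradients or model internals are used."""
    rng = np.random.default_rng(seed)
    x_adv = x.copy()
    best = confidence_fn(x_adv, true_label)  # victim's confidence in the true class
    for _ in range(steps):
        # Propose a small random perturbation and keep it only if it
        # lowers the true-class confidence (i.e., moves toward misclassification).
        delta = rng.uniform(-eps, eps, size=x.shape)
        perturbation = np.clip((x_adv - x) + delta, -eps, eps)  # stay in the eps-ball
        candidate = np.clip(x + perturbation, 0.0, 1.0)         # stay in valid pixel range
        score = confidence_fn(candidate, true_label)
        if score < best:
            x_adv, best = candidate, score
    return x_adv
```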
Z Yuan, J Zhang, Y Jia, C Tan… - Proceedings of the …, 2021 - openaccess.thecvf.com
In recent years, research on adversarial attacks has become a prominent topic. Although the current literature on transfer-based adversarial attacks has achieved promising results for …
Neural networks are known to be vulnerable to carefully crafted adversarial examples, and these malicious samples often transfer, i.e., they remain adversarial even against other …
Q Li, Y Guo, W Zuo, H Chen - arXiv preprint arXiv:2302.05086, 2023 - arxiv.org
The transferability of adversarial examples across deep neural networks (DNNs) is the crux of many black-box attacks. Many prior efforts have been devoted to improving the …
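As a rough illustration of the transfer setting this snippet refers to (not the paper's own method), a minimal sketch, assuming PyTorch, a white-box surrogate classifier, and a single FGSM step (the cited works typically use stronger iterative or ensemble variants), could be:

```python
import torch
import torch.nn.functional as F

def fgsm_on_surrogate(surrogate, x, label, eps=8 / 255):
    """Craft an adversarial example with one FGSM step on a white-box
    surrogate; transferability means the same example may also fool a
    different, unseen black-box target model."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(surrogate(x), label)
    loss.backward()
    # Step in the sign of the gradient and keep pixels in a valid range.
    x_adv = (x + eps * x.grad.sign()).clamp(0.0, 1.0)
    return x_adv.detach()

# Usage sketch (surrogate_model / target_model are assumed pretrained classifiers):
# x_adv = fgsm_on_surrogate(surrogate_model, images, labels)
# preds_on_target = target_model(x_adv).argmax(dim=1)  # black-box evaluation
```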
M Li, C Deng, T Li, J Yan, X Gao… - Proceedings of the …, 2020 - openaccess.thecvf.com
An intriguing property of adversarial examples is their transferability, which suggests that black-box attacks are feasible in real-world applications. Previous works mostly study the …