Authors
Salah Ud Din, Naveed Akhtar, Shahzad Younis, Faisal Shafait, Atif Mansoor, Muhammad Shafique
Publication date
2020/7/1
Journal
Pattern Recognition Letters
Volume
135
Pages
146-152
Publisher
North-Holland
Description
We propose a steganography-based technique to generate adversarial perturbations that fool deep models on any image. The proposed perturbations are computed in a transform domain, where a single secret image embedded in any target image makes a deep model misclassify that target image with high probability. The resulting attack is well suited to the black-box setting, as it requires no information about the target model. Moreover, being non-iterative, our perturbation estimation remains computationally efficient. The computed perturbations are imperceptible to humans while achieving high fooling ratios for models trained on the large-scale ImageNet dataset. We demonstrate successful fooling of ResNet-50, VGG-16, Inception-V3, and MobileNet-V2, achieving up to 89% fooling of these popular classification models.
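As an illustration of the general idea the abstract describes (embedding a secret image into a target image in a transform domain, non-iteratively and without model access), the following minimal sketch blends the 2-D DCT coefficients of a "secret" image into a target image. The transform choice, the `embed_perturbation` function, and the strength parameter `alpha` are assumptions for illustration only, not the paper's exact method.

```python
import numpy as np
from scipy.fft import dct, idct

# Illustrative sketch only: the paper's actual transform and embedding
# rule are not reproduced here. This shows the general shape of a
# steganographic, transform-domain perturbation.

def dct2(x):
    """2-D DCT with orthonormal scaling."""
    return dct(dct(x, axis=0, norm="ortho"), axis=1, norm="ortho")

def idct2(x):
    """Inverse 2-D DCT with orthonormal scaling."""
    return idct(idct(x, axis=0, norm="ortho"), axis=1, norm="ortho")

def embed_perturbation(target, secret, alpha=0.05):
    """Blend the secret image's DCT coefficients into the target.

    target, secret: float arrays in [0, 1] of the same shape.
    alpha: embedding strength (hypothetical parameter; small values
    keep the perturbation visually subtle).
    """
    T = dct2(target)
    S = dct2(secret)
    # Non-iterative, model-agnostic: one transform, one blend, one inverse.
    perturbed = idct2(T + alpha * S)
    return np.clip(perturbed, 0.0, 1.0)

rng = np.random.default_rng(0)
target = rng.random((224, 224))
secret = rng.random((224, 224))
adv = embed_perturbation(target, secret)
print(adv.shape)
```

Because the DCT is linear, this particular sketch reduces to adding `alpha * secret` in the pixel domain; a practical scheme would use a selective embedding rule in the coefficient domain, which the paper develops.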
Scholar articles
SU Din, N Akhtar, S Younis, F Shafait, A Mansoor… - Pattern Recognition Letters, 2020