SparseFool: a few pixels make a big difference

A Modas, SM Moosavi-Dezfooli… - Proceedings of the …, 2019 - openaccess.thecvf.com
Abstract
Deep Neural Networks have achieved extraordinary results on image classification tasks, but have been shown to be vulnerable to attacks with carefully crafted perturbations of the input data. Although most attacks usually change the values of many of an image's pixels, it has been shown that deep networks are also vulnerable to sparse alterations of the input. However, no computationally efficient method has been proposed to compute sparse perturbations. In this paper, we exploit the low mean curvature of the decision boundary and propose SparseFool, a geometry-inspired sparse attack that controls the sparsity of the perturbations. Extensive evaluations show that our approach computes sparse perturbations very fast and scales efficiently to high-dimensional data. We further analyze the transferability and the visual effects of the perturbations, and show the existence of shared semantic information across images and networks. Finally, we show that adversarial training only slightly improves robustness against sparse additive perturbations computed with SparseFool.
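The abstract's geometric idea is that the decision boundary has low mean curvature, so near a sample it behaves almost like a hyperplane, and the l1-minimal perturbation onto a hyperplane changes only a single coordinate. A minimal sketch of that core step for a toy linear binary classifier (the function name `sparse_step`, the overshoot factor, and the example weights are illustrative assumptions, not the paper's implementation; the actual SparseFool algorithm iterates such a linearization for general nonlinear networks and adds box constraints):

```python
import numpy as np

def sparse_step(x, w, b, overshoot=1.02):
    """l1-minimal step pushing x across the affine boundary w.x + b = 0.

    The l1-minimal perturbation onto a hyperplane modifies only the
    coordinate where |w_i| is largest; `overshoot` nudges the point
    slightly past the boundary so the predicted label actually flips.
    (Hypothetical sketch, not the authors' code.)
    """
    f = float(w @ x + b)              # signed score at x
    i = int(np.argmax(np.abs(w)))     # most influential coordinate
    delta = np.zeros_like(x)
    delta[i] = -overshoot * f / w[i]  # move only that coordinate
    return delta

# Toy linear binary classifier with illustrative weights.
w = np.array([0.2, -1.5, 0.4])
b = 0.1
x = np.array([1.0, 0.3, -0.5])

delta = sparse_step(x, w, b)
print(np.count_nonzero(delta))  # 1: a single "pixel" changes
print(np.sign(w @ x + b), np.sign(w @ (x + delta) + b))  # label flips
```

For a nonlinear network, `w` would be replaced by the gradient of the classifier at `x`, and the step repeated until the label changes, which is what makes the low-curvature (near-linear) boundary assumption central to the method's efficiency.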