Authors
Yuezun Li, Daniel Tian, Ming-Ching Chang, Xiao Bian, Siwei Lyu
Publication date
2018/9/16
Journal
arXiv preprint arXiv:1809.05962
Description
Adversarial perturbations are useful tools for probing the weaknesses of deep-learning-based computer vision algorithms. In this paper, we describe a robust adversarial perturbation (R-AP) method to attack deep proposal-based object detectors and instance segmentation algorithms. Our method focuses on attacking the component these algorithms share, the Region Proposal Network (RPN), to universally degrade their performance in a black-box fashion. To do so, we design a loss function that combines a label loss and a novel shape loss, and optimize it with respect to the image using a gradient-based iterative algorithm. Evaluations are performed on the MS COCO 2014 dataset for adversarial attacks on 6 state-of-the-art object detectors and 2 instance segmentation algorithms. Experimental results demonstrate the efficacy of the proposed method.
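The core mechanism described above is an iterative, gradient-based perturbation of the input image under a loss on the proposal network's outputs. The sketch below illustrates that general recipe on a toy linear "objectness" scorer standing in for an RPN; the scorer, step size, and epsilon bound are all illustrative assumptions, not the paper's actual loss or models.

```python
import numpy as np

# Toy stand-in for an RPN objectness scorer: a fixed linear model.
# (Hypothetical; the real R-AP method attacks a trained Region Proposal
# Network with a combined label loss and shape loss.)
rng = np.random.default_rng(0)
w = rng.normal(size=64)   # "objectness" weights
x = rng.normal(size=64)   # input "image" features

def objectness(img):
    # Score the attack tries to drive down (the "label loss" target).
    return float(w @ img)

def attack(img, eps=0.1, alpha=0.02, steps=10):
    """Iterative signed-gradient perturbation, projected into an L-inf ball."""
    adv = img.copy()
    for _ in range(steps):
        grad = w                                   # d(objectness)/d(img) for a linear scorer
        adv = adv - alpha * np.sign(grad)          # step down the loss
        adv = img + np.clip(adv - img, -eps, eps)  # keep perturbation within eps
    return adv

adv = attack(x)
```

After the loop, `objectness(adv)` is lower than `objectness(x)` while the perturbation stays bounded, which is the same trade-off the paper evaluates at detector scale.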
Total citations
(citations-per-year histogram, 2018–2024)
Scholar articles
Y Li, D Tian, MC Chang, X Bian, S Lyu - arXiv preprint arXiv:1809.05962, 2018