Robust adversarial perturbation on deep proposal-based models

Y Li, D Tian, MC Chang, X Bian, S Lyu - arXiv preprint arXiv:1809.05962, 2018 - arxiv.org
Adversarial noise is a useful tool for probing the weaknesses of deep-learning-based computer vision algorithms. In this paper, we describe a robust adversarial perturbation (R-AP) method to attack deep proposal-based object detectors and instance segmentation algorithms. Our method focuses on attacking the component common to these algorithms, the Region Proposal Network (RPN), to universally degrade their performance in a black-box fashion. To do so, we design a loss function that combines a label loss and a novel shape loss, and optimize it with respect to the image using a gradient-based iterative algorithm. Evaluations are performed on the MS COCO 2014 dataset by adversarially attacking 6 state-of-the-art object detectors and 2 instance segmentation algorithms. Experimental results demonstrate the efficacy of the proposed method.
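The gradient-based iterative optimization of a combined loss described above can be sketched on a toy differentiable stand-in for an RPN. Everything here is an illustrative assumption, not the paper's actual formulation: the linear "objectness" and "box scale" outputs, the quadratic label and shape losses, and all variable names (w_label, w_shape, lam, eps) are hypothetical; the real method backpropagates through a Region Proposal Network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an RPN: objectness score s(x) = w_label @ x and predicted
# box scale b(x) = w_shape @ x, both linear so the gradient below is exact.
d = 16
w_label = rng.standard_normal(d)
w_shape = rng.standard_normal(d)
x0 = rng.standard_normal(d)  # "clean image", flattened to a vector

lam, eps, alpha, steps = 1.0, 0.5, 0.05, 40  # illustrative hyperparameters

def loss(x):
    # Label loss: drive the objectness score toward 0 so proposals are
    # scored as background. Shape loss: shrink the predicted box scale so
    # remaining proposals no longer cover the object.
    s, b = w_label @ x, w_shape @ x
    return s**2 + lam * b**2

def grad(x):
    s, b = w_label @ x, w_shape @ x
    return 2 * s * w_label + lam * 2 * b * w_shape

# Sign-gradient iterative descent on the combined loss, with the
# perturbation projected back into an L-infinity budget eps each step.
x = x0.copy()
for _ in range(steps):
    x = x - alpha * np.sign(grad(x))       # descend the combined loss
    x = x0 + np.clip(x - x0, -eps, eps)    # project into the budget

assert loss(x) < loss(x0)                  # perturbation degrades the RPN
```

A real attack would replace the two linear maps with a forward pass through the target RPN and obtain the gradient by backpropagation, but the loop structure is the same.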