Spatially localized perturbation GAN (SLP-GAN) for generating invisible adversarial patches

Y Kim, H Kang, A Mukaroh, N Suryanto… - International Conference on Information Security Applications, 2020 - Springer
Abstract
Deep Neural Networks (DNNs) are highly vulnerable to adversarial attacks because of instability and unreliability in the training process. Recently, many studies on adversarial patches have aimed to make image classifiers misclassify by attaching patches to images. However, most previous research employs adversarial patches that are visible to the human eye, making them easy to identify and counter. In this paper, we propose a new method, Spatially Localized Perturbation GAN (SLP-GAN), that can generate visually natural patches while maintaining a high attack success rate. SLP-GAN uses a spatially localized perturbation, taken from the most representative area of the target image (i.e., its attention map), as the adversarial patch. The patch region is extracted using the Grad-CAM algorithm to improve the attack's effectiveness against the target model. Our experiments on the GTSRB and CIFAR-10 datasets show that SLP-GAN outperforms state-of-the-art adversarial patch attack methods in terms of visual fidelity.
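To make the abstract's pipeline concrete, the sketch below shows the core idea of a spatially localized perturbation: a Grad-CAM attention map is thresholded into a binary patch-region mask, and a perturbation is applied only inside that region. This is a minimal illustration assuming PyTorch and a torchvision ResNet-18 as the target model; the threshold value, the choice of layer4 as the Grad-CAM layer, and the random tensor standing in for the generator's output are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of Grad-CAM-masked localized perturbation (illustrative, not
# the paper's exact SLP-GAN setup).
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

target_model = resnet18(weights=None, num_classes=10).eval()

def grad_cam_mask(model, images, conv_layer, threshold=0.5):
    """Threshold a Grad-CAM attention map into a binary patch-region mask."""
    activations, gradients = [], []
    fwd = conv_layer.register_forward_hook(lambda m, i, o: activations.append(o))
    bwd = conv_layer.register_full_backward_hook(lambda m, gi, go: gradients.append(go[0]))
    logits = model(images)
    # Backpropagate the score of the predicted class (untargeted setting).
    score = logits.gather(1, logits.argmax(1, keepdim=True)).sum()
    model.zero_grad()
    score.backward()
    fwd.remove()
    bwd.remove()
    # Standard Grad-CAM: channel weights are global-average-pooled gradients.
    weights = gradients[0].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * activations[0]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=images.shape[-2:], mode="bilinear", align_corners=False)
    cmin = cam.amin(dim=(2, 3), keepdim=True)
    cmax = cam.amax(dim=(2, 3), keepdim=True)
    cam = (cam - cmin) / (cmax - cmin + 1e-8)
    return (cam >= threshold).float()  # 1 inside the attention region, 0 elsewhere

# Spatially localized patch: the perturbation is zeroed outside the mask.
images = torch.rand(4, 3, 224, 224)  # stand-in for a batch of resized input images
mask = grad_cam_mask(target_model, images, target_model.layer4)
perturbation = torch.tanh(torch.randn_like(images)) * 0.1  # stand-in for a generator's output
adv_images = torch.clamp(images + mask * perturbation, 0.0, 1.0)
```

In the full method, the perturbation would come from a trained GAN generator optimized against the target classifier; here random tensors stand in for both the data and the generator output, so only the masking mechanism itself is demonstrated.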