Authors
Haibo Zhang, Zhihua Yao, Kouichi Sakurai
Publication date
2023/6/19
Book
International Conference on Applied Cryptography and Network Security
Pages
601-620
Publisher
Springer Nature Switzerland
Description
Convolutional neural networks are widely used for image recognition, but they are vulnerable to adversarial attacks that cause them to misclassify an image. Such attacks pose a significant security risk in safety-critical applications such as facial recognition and autonomous driving. Researchers have made progress in defending against adversarial attacks through two approaches: making the neural networks themselves more robust, and removing the adversarial perturbation from the image through pre-processing. This paper builds on a recent defense model of the latter type, which uses image-to-image translation to regenerate images perturbed by adversarial attacks. We optimized the training process of that model and tested its performance against more recent and stronger attacks. The results show that the model is able to regenerate images attacked by the state-of …
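As a rough illustration of the kind of attack the abstract describes (not the paper's own method or models), the fast gradient sign method (FGSM) can be sketched on a toy logistic-regression classifier. The weights, input, and epsilon below are invented values chosen purely for the example:

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """FGSM for a logistic-regression classifier p(y=1|x) = sigmoid(w.x + b).

    Moves x by eps (in the L-infinity sense) along the sign of the gradient
    of the cross-entropy loss, so as to *increase* the loss for true label y.
    """
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))  # model's predicted probability
    grad = (p - y) * w                      # d(cross-entropy)/dx
    return x + eps * np.sign(grad)

# Toy example: a point correctly classified as class 1 ...
w = np.array([1.0, -1.0])
b = 0.0
x = np.array([0.2, 0.1])                    # w @ x + b = 0.1 > 0 -> class 1

# ... is flipped to class 0 by a small bounded perturbation.
x_adv = fgsm_perturb(x, w, b, y=1.0, eps=0.3)
```

A pre-processing defense of the kind surveyed in the paper would aim to map `x_adv` back toward the clean `x` before classification; here the perturbation is visible as a small, bounded shift in each input coordinate.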