Authors
Haibo Zhang, Kouichi Sakurai
Publication date
2021/12/22
Journal
IEEE Access
Volume
9
Pages
169031-169043
Publisher
IEEE
Description
Deep learning has become one of the most popular research topics today. Researchers have developed cutting-edge learning algorithms and frameworks around deep learning and applied them to a wide range of fields to solve real-world problems. This article, however, is concerned with the security risks of deep learning models, in particular adversarial attacks: an attacker can maliciously perturb input images so that the classification model is deceived into producing incorrect predictions. This paper proposes a defense that pre-denoises all input images by adding a purification layer in front of the classification model. The method is built on the basic architecture of Conditional Generative Adversarial Networks and adds an image perception loss to the original algorithm …
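The core idea in the description, a denoising purification layer placed before a classification model and trained with an added image perception (perceptual) loss, can be sketched roughly as follows. This is a minimal PyTorch illustration under assumed settings: the generator architecture, the VGG16 feature cut used for the perceptual term, and the 0.1 loss weight are illustrative assumptions rather than the authors' configuration, and the cGAN discriminator objective is omitted for brevity.

import torch
import torch.nn as nn
from torchvision.models import vgg16

class PurificationLayer(nn.Module):
    """Small encoder-decoder standing in for the paper's cGAN generator:
    maps a (possibly adversarial) image to a denoised image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, kernel_size=3, padding=1), nn.Sigmoid(),  # keep pixels in [0, 1]
        )

    def forward(self, x):
        return self.net(x)

class PerceptualLoss(nn.Module):
    """L2 distance between VGG16 feature maps of the purified and clean
    images, one common reading of the abstract's 'image perception loss'."""
    def __init__(self):
        super().__init__()
        # weights=None keeps the sketch offline-runnable; real use would
        # load pretrained ImageNet weights for meaningful features.
        feats = vgg16(weights=None).features[:16]
        for p in feats.parameters():
            p.requires_grad_(False)
        self.feats = feats.eval()

    def forward(self, purified, clean):
        return nn.functional.mse_loss(self.feats(purified), self.feats(clean))

purifier = PurificationLayer()
perceptual = PerceptualLoss()
pixel_loss = nn.L1Loss()

adv_batch = torch.rand(2, 3, 64, 64)    # stand-in adversarial inputs
clean_batch = torch.rand(2, 3, 64, 64)  # matching clean targets

purified = purifier(adv_batch)
# Generator objective: pixel fidelity plus the perceptual term; the full
# cGAN method would also include a discriminator (adversarial) loss.
loss = pixel_loss(purified, clean_batch) + 0.1 * perceptual(purified, clean_batch)
loss.backward()

At test time the trained purifier simply wraps any classifier, so predictions come from classifier(purifier(x)) and the classifier itself needs no modification.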
Total citations