Authors
Seok-Hwan Choi, Jin-Myeong Shin, Peng Liu, Yoon-Ho Choi
Publication date
2022/3/16
Journal
IEEE Access
Volume
10
Pages
33602-33615
Publisher
IEEE
Description
An adversarial example, which is an input instance with small, intentional feature perturbations to a machine learning model, represents a concrete problem in artificial intelligence safety. As an emerging defense against adversarial examples, generative adversarial network (GAN)-based defense methods have recently been studied. However, the performance of state-of-the-art GAN-based defense methods is limited because, although the target deep neural network models protected by GAN-based defenses are robust against adversarial examples, they make false decisions for legitimate input data. To solve this accuracy degradation of GAN-based defense methods on legitimate input data, we propose a new GAN-based defense method, called Adversarially Robust Generative …
Total citations