Authors
Hyun Kwon, Yongchul Kim, Hyunsoo Yoon, Daeseon Choi
Publication date
2019/6/27
Journal
IEEE Transactions on Information Forensics and Security
Volume
15
Pages
526-538
Publisher
IEEE
Description
Deep neural networks (DNNs) are widely used for image recognition, speech recognition, and other pattern analysis tasks. Despite the success of DNNs, these systems can be exploited by what are termed adversarial examples. An adversarial example, in which a small distortion is added to the input data, can be designed to be misclassified by the DNN while remaining undetected by humans or other systems. Such adversarial examples have been studied mainly in the image domain. Recently, however, studies on adversarial examples have been expanding into the voice domain. For example, when an adversarial example is applied to enemy wiretapping devices (victim classifiers) in a military environment, the enemy device will misinterpret the intended message. In such scenarios, it is necessary that friendly wiretapping devices (protected classifiers) not be deceived. Therefore, the selective adversarial …
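To make the idea in the abstract concrete, below is a minimal sketch of a selective adversarial attack: a bounded perturbation is optimized so a victim classifier predicts an attacker-chosen target while a protected classifier keeps the original label. The joint loss, equal weighting, Adam optimizer, step budget, and the `victim`/`protected` model names are all illustrative assumptions, not the paper's exact formulation.

```python
import torch

def selective_adversarial_example(x, y_true, y_target, victim, protected,
                                  eps=0.01, steps=100, lr=1e-3):
    """Sketch: perturb x so `victim` predicts y_target while `protected`
    still predicts y_true. Hyperparameters here are assumptions."""
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    ce = torch.nn.CrossEntropyLoss()
    for _ in range(steps):
        x_adv = x + delta
        # push the victim classifier toward the attacker-chosen target class
        loss_victim = ce(victim(x_adv), y_target)
        # keep the protected classifier on the original (true) label
        loss_protected = ce(protected(x_adv), y_true)
        loss = loss_victim + loss_protected  # equal weighting is an assumption
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # bound the distortion so it stays small
    return (x + delta).detach()
```

The key design point is the two-term loss: minimizing only `loss_victim` would yield an ordinary targeted adversarial example, while the added `loss_protected` term is what makes the attack selective between the two classifiers.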
Total citations
[Citations-per-year chart, 2019–2024]
Scholar articles
H Kwon, Y Kim, H Yoon, D Choi - IEEE Transactions on Information Forensics and …, 2019