Authors
Chunchuan Lyu, Kaizhu Huang, Hai-Ning Liang
Publication date
2015/11/14
Conference
2015 IEEE International Conference on Data Mining
Pages
301-309
Publisher
IEEE
Description
Adversarial examples are augmented data points generated by imperceptible perturbation of input samples. They have recently drawn much attention within the machine learning and data mining community. Being difficult to distinguish from real examples, such adversarial examples can change the prediction of many of the best learning models, including state-of-the-art deep learning models. Recent attempts have been made to build robust models that take adversarial examples into account. However, these methods can either lead to performance drops or lack mathematical motivation. In this paper, we propose a unified framework to build robust machine learning models against adversarial examples. More specifically, using the unified framework, we develop a family of gradient regularization methods that effectively penalize the gradient of the loss function w.r.t. inputs. Our proposed framework is appealing in …
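To make the penalized objective concrete, the sketch below illustrates input-gradient regularization in PyTorch. It is a minimal illustration of the general idea described in the abstract, not the authors' implementation: the model `model`, batch `(x, y)`, and weight `lam` are hypothetical placeholders, and the specific norm and formulation used by the paper's family of methods may differ.

```python
import torch
import torch.nn.functional as F

def gradient_regularized_loss(model, x, y, lam=0.1):
    """Cross-entropy loss plus an L2 penalty on the input gradient (illustrative sketch)."""
    x = x.clone().requires_grad_(True)   # track gradients w.r.t. the inputs
    loss = F.cross_entropy(model(x), y)  # standard classification loss
    # Gradient of the loss w.r.t. the inputs; create_graph=True keeps the
    # graph so the penalty itself can be differentiated through.
    (grad_x,) = torch.autograd.grad(loss, x, create_graph=True)
    # Squared L2 norm of each example's input gradient, averaged over the batch.
    penalty = grad_x.flatten(1).pow(2).sum(dim=1).mean()
    return loss + lam * penalty
```

Because the penalty depends on the gradient of the loss w.r.t. the inputs, minimizing it requires second-order gradients (double backpropagation), which is why `create_graph=True` is set in the sketch.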
Total citations
[Citations-per-year histogram, 2016–2024; per-year counts not recoverable from page extraction]
Scholar articles
C Lyu, K Huang, HN Liang - 2015 IEEE international conference on data mining, 2015