Authors
Fuyong Zhang, Yi Wang, Shigang Liu, Hua Wang
Publication date
2020/9
Journal
World Wide Web
Volume
23
Pages
2957-2977
Publisher
Springer US
Description
Learning-based classifiers have been found to be susceptible to adversarial examples. Recent studies have suggested that ensemble classifiers tend to be more robust than single classifiers against evasion attacks. In this paper, we argue that this is not necessarily the case. In particular, we show that a discrete-valued random forest classifier can be easily evaded by adversarial inputs manipulated based only on the model's decision outputs. The proposed evasion algorithm is gradient-free and fast to implement. Our evaluation results demonstrate that random forests can be even more vulnerable than SVMs, whether single or ensemble, to evasion attacks under both white-box and the more realistic black-box settings.
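The abstract describes a gradient-free evasion attack that manipulates inputs using only the model's outputs. The sketch below is a minimal illustration of that general idea, not the authors' algorithm: it hill-climbs a single input against a scikit-learn random forest, accepting a random one-feature perturbation only when it does not reduce the forest's vote fraction for the target class. The step size, iteration budget, and the use of predict_proba (the trees' vote fraction) as the acceptance score are all assumptions made for this example.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

def greedy_evasion(model, x, target_label, step=0.1, max_iters=200, seed=0):
    """Illustrative gradient-free evasion (not the paper's algorithm):
    perturb one random feature at a time and keep the change only if the
    forest's vote fraction for the target class does not decrease."""
    rng = np.random.default_rng(seed)
    x_adv = x.copy()
    best_score = model.predict_proba(x_adv.reshape(1, -1))[0, target_label]
    for _ in range(max_iters):
        if model.predict(x_adv.reshape(1, -1))[0] == target_label:
            break  # decision flipped: evasion succeeded
        i = rng.integers(x_adv.size)                 # random feature index
        candidate = x_adv.copy()
        candidate[i] += step * rng.choice([-1.0, 1.0])
        score = model.predict_proba(candidate.reshape(1, -1))[0, target_label]
        if score >= best_score:                      # accept non-worsening moves
            x_adv, best_score = candidate, score
    return x_adv

# Usage on a toy problem: try to flip the prediction for one point.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
x0, target = X[0], 1 - y[0]
x_adv = greedy_evasion(clf, x0, target)
print("original:", clf.predict(x0.reshape(1, -1))[0],
      "adversarial:", clf.predict(x_adv.reshape(1, -1))[0])

Because a random forest's decision surface is piecewise constant, a coordinate-wise search like this can cross decision boundaries without any gradient information, which is the setting the paper exploits.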
Total citations
[Citations-per-year chart: 2020–2024]