Authors
Mika Juuti, Sebastian Szyller, Samuel Marchal, N Asokan
Publication date
2019/6/17
Conference paper
2019 IEEE European Symposium on Security and Privacy (EuroS&P)
Pages
512-527
Publisher
IEEE
Description
Machine learning (ML) applications are increasingly prevalent. Protecting the confidentiality of ML models becomes paramount for two reasons: (a) a model can be a business advantage to its owner, and (b) an adversary may use a stolen model to find transferable adversarial examples that can evade classification by the original model. Access to the model can be restricted to be only via well-defined prediction APIs. Nevertheless, prediction APIs still provide enough information to allow an adversary to mount model extraction attacks by sending repeated queries via the prediction API. In this paper, we describe new model extraction attacks using novel approaches for generating synthetic queries, and optimizing training hyperparameters. Our attacks outperform state-of-the-art model extraction in terms of transferability of both targeted and non-targeted adversarial examples (up to +29-44 percentage points, pp), and …
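The abstract describes model extraction at a high level: an adversary sends repeated queries to a prediction API, collects the returned labels, and trains a substitute model on them. Below is a minimal Python sketch of that generic loop, not the paper's specific attack: the victim model, the label-only query() API, and the naive Gaussian synthetic queries are all illustrative scikit-learn stand-ins, whereas the paper's contribution lies precisely in smarter synthetic-query generation and training-hyperparameter optimization.

```python
# Minimal sketch of a generic model extraction loop (illustrative only;
# not the attack from the paper). The victim model and query API below
# are hypothetical stand-ins built with scikit-learn.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Victim model: the adversary cannot inspect its weights, only query it.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
victim = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300,
                       random_state=0).fit(X, y)

def query(batch):
    """Prediction API: returns labels only, as a black-box service would."""
    return victim.predict(batch)

# Adversary: generate synthetic queries, label them via the API, and fit
# a substitute model on the (query, label) pairs. Gaussian noise is the
# crudest possible query strategy, used here only to keep the sketch short.
n_queries = 1000
synthetic = rng.normal(size=(n_queries, X.shape[1]))
labels = query(synthetic)
substitute = LogisticRegression(max_iter=1000).fit(synthetic, labels)

# Agreement between substitute and victim on natural inputs gives a rough
# measure of how much of the victim's behavior was extracted.
agreement = (substitute.predict(X) == victim.predict(X)).mean()
print(f"substitute/victim agreement: {agreement:.2%}")
```

Transferable adversarial examples crafted against such a substitute can then be replayed against the original model, which is why the abstract measures attack quality in percentage points of transferability.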
Total citations
Cited by year: 2018: 4 · 2019: 20 · 2020: 53 · 2021: 86 · 2022: 109 · 2023: 142 · 2024: 78
Scholar articles
PRADA: Protecting Against DNN Model Stealing Attacks
M Juuti, S Szyller, S Marchal, N Asokan - 2019 IEEE European Symposium on Security and …, 2019