Authors
Sebastian Szyller, Buse Gul Atli, Samuel Marchal, N Asokan
Publication date
2021/10/17
Book
Proceedings of the 29th ACM International Conference on Multimedia
Pages
4417-4425
Description
Training machine learning (ML) models is expensive in terms of computational power, amounts of labeled data and human expertise. Thus, ML models constitute business value for their owners. Embedding digital watermarks during model training allows a model owner to later identify their models in case of theft or misuse. However, model functionality can also be stolen via model extraction, where an adversary trains a surrogate model using results returned from a prediction API of the original model. Recent work has shown that model extraction is a realistic threat. Existing watermarking schemes are ineffective against model extraction since it is the adversary who trains the surrogate model. In this paper, we introduce DAWN (Dynamic Adversarial Watermarking of Neural Networks), the first approach to use watermarking to deter model extraction theft. Unlike prior watermarking schemes, DAWN does not impose …
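The abstract's core idea, selecting a small, secret subset of API queries and answering them with deliberately incorrect predictions so that any surrogate trained on the responses inherits those (input, wrong label) pairs as a watermark, can be illustrated with a minimal sketch. This is not the paper's implementation: the HMAC-based selection rule and the names SECRET_KEY, WATERMARK_RATE, and answer_query are illustrative assumptions.

```python
import hashlib
import hmac

import numpy as np

# Hypothetical secret key held by the model owner. Selection must be
# deterministic per input so repeated queries get consistent answers.
SECRET_KEY = b"model-owner-secret"
WATERMARK_RATE = 0.005  # illustrative fraction of queries to watermark

def is_watermarked(x: np.ndarray) -> bool:
    """Decide deterministically whether input x falls in the watermark set."""
    digest = hmac.new(SECRET_KEY, x.tobytes(), hashlib.sha256).digest()
    # Map the keyed hash to [0, 1) and compare against the watermark rate.
    value = int.from_bytes(digest[:8], "big") / 2**64
    return value < WATERMARK_RATE

def answer_query(model_predict, x: np.ndarray, num_classes: int) -> int:
    """Prediction API wrapper: returns a wrong label for watermark inputs."""
    y = int(model_predict(x))
    if is_watermarked(x):
        # Return a deterministic incorrect label; the (x, wrong label)
        # pairs later serve as evidence against a surrogate model.
        digest = hmac.new(SECRET_KEY, b"label" + x.tobytes(),
                          hashlib.sha256).digest()
        y = (y + 1 + digest[0] % (num_classes - 1)) % num_classes
    return y
```

Because only the model owner knows the key, they can regenerate the watermark set offline and test whether a suspect model reproduces the incorrect labels far more often than chance.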
Total citations
Scholar articles
S Szyller, BG Atli, S Marchal, N Asokan - Proceedings of the 29th ACM International Conference …, 2021