Dawn: Dynamic adversarial watermarking of neural networks

S Szyller, BG Atli, S Marchal, N Asokan - Proceedings of the 29th ACM International Conference on Multimedia, 2021 - dl.acm.org
Training machine learning (ML) models is expensive in terms of computational power, amounts of labeled data and human expertise. Thus, ML models constitute business value for their owners. Embedding digital watermarks during model training allows a model owner to later identify their models in case of theft or misuse. However, model functionality can also be stolen via model extraction, where an adversary trains a surrogate model using results returned from a prediction API of the original model. Recent work has shown that model extraction is a realistic threat. Existing watermarking schemes are ineffective against model extraction since it is the adversary who trains the surrogate model. In this paper, we introduce DAWN (Dynamic Adversarial Watermarking of Neural Networks), the first approach to use watermarking to deter model extraction theft. Unlike prior watermarking schemes, DAWN does not impose changes to the training process but operates at the prediction API of the protected model, by dynamically changing the responses for a small subset of queries (e.g., 0.5%) from API clients. This set is a watermark that will be embedded in case a client uses its queries to train a surrogate model. We show that DAWN is resilient against two state-of-the-art model extraction attacks, effectively watermarking all extracted surrogate models, allowing model owners to reliably demonstrate ownership (with confidence greater than 1 − 2⁻⁶⁴), while incurring negligible loss of prediction accuracy (0.03-0.5%).
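The abstract describes DAWN as operating at the prediction API: a small, secretly selected fraction of client queries receives deterministically altered responses, and those query/response pairs act as the watermark that a surrogate model trained on the API's outputs would inherit. The sketch below illustrates this general idea under stated assumptions; the keyed-hash selection rule, the 0.5% watermark rate, and the label-perturbation helper are illustrative choices for this example, not a reproduction of the authors' exact scheme.

```python
import hmac
import hashlib
import numpy as np


class DawnStylePredictionAPI:
    """Minimal sketch of a DAWN-style API wrapper (illustrative, not the paper's exact design).

    A keyed hash of the raw input decides whether a query belongs to the
    watermark (trigger) set; for such queries the returned label is
    deterministically perturbed, so repeated queries stay consistent.
    """

    def __init__(self, model, secret_key: bytes, num_classes: int,
                 watermark_rate: float = 0.005):
        self.model = model                 # protected model: x -> integer class label
        self.key = secret_key              # secret key known only to the model owner
        self.num_classes = num_classes
        # Fraction of queries whose responses are altered (e.g., 0.5%).
        self.threshold = int(watermark_rate * 2**32)

    def _is_watermark_query(self, x: np.ndarray) -> bool:
        # Keyed hash of the input decides trigger-set membership,
        # so the same input always gets the same (possibly altered) answer.
        digest = hmac.new(self.key, x.tobytes(), hashlib.sha256).digest()
        return int.from_bytes(digest[:4], "big") < self.threshold

    def _altered_label(self, x: np.ndarray, true_label: int) -> int:
        # Deterministic incorrect label derived from the input and the key.
        digest = hmac.new(self.key, b"label" + x.tobytes(), hashlib.sha256).digest()
        offset = 1 + int.from_bytes(digest[:4], "big") % (self.num_classes - 1)
        return (true_label + offset) % self.num_classes

    def predict(self, x: np.ndarray) -> int:
        y = self.model(x)
        if self._is_watermark_query(x):
            return self._altered_label(x, y)
        return y
```

To claim ownership of a suspected surrogate, the owner would re-derive the trigger inputs and check that the surrogate reproduces the altered labels far more often than chance, which is what underlies the high-confidence ownership demonstration reported in the abstract.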