Defending against model stealing via verifying embedded external features

Y Li, L Zhu, X Jia, Y Jiang, ST Xia, X Cao - Proceedings of the AAAI …, 2022 - ojs.aaai.org
Obtaining a well-trained model involves expensive data collection and training procedures; the model is therefore valuable intellectual property. Recent studies revealed that …

Dataset inference: Ownership resolution in machine learning

P Maini, M Yaghini, N Papernot - arXiv preprint arXiv:2104.10706, 2021 - arxiv.org
With ever more data and computation involved in their training, machine learning
models constitute valuable intellectual property. This has spurred interest in model stealing …

Increasing the cost of model extraction with calibrated proof of work

A Dziedzic, MA Kaleem, YS Lu, N Papernot - arXiv preprint arXiv …, 2022 - arxiv.org
In model extraction attacks, adversaries can steal a machine learning model exposed via a
public API by repeatedly querying it and adjusting their own model based on obtained …
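
The loop this snippet describes is short enough to sketch. Below is a minimal, hypothetical rendering in PyTorch; the `victim_api` callable and its soft-label (probability) output are assumptions for illustration, not the paper's exact setup:

```python
import torch
import torch.nn.functional as F

def extract(victim_api, surrogate, queries, epochs=10, lr=1e-3):
    """Train `surrogate` to imitate a black-box victim.

    victim_api: callable mapping a batch of inputs to output
        probabilities (the attacker's only access to the victim).
    queries: iterable of unlabeled input batches the attacker owns.
    """
    opt = torch.optim.Adam(surrogate.parameters(), lr=lr)
    for _ in range(epochs):
        for x in queries:
            with torch.no_grad():
                y_victim = victim_api(x)          # one paid API call
            logits = surrogate(x)
            # Match the victim's output distribution (soft-label
            # knowledge distillation onto the attacker's model).
            loss = F.kl_div(F.log_softmax(logits, dim=1),
                            y_victim, reduction="batchmean")
            opt.zero_grad()
            loss.backward()
            opt.step()
    return surrogate
```

The proof-of-work defense named in the title targets exactly this loop: by attaching a calibrated client-side puzzle to each query, it makes every iteration cost the attacker compute.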

Data-free model extraction

JB Truong, P Maini, RJ Walls… - Proceedings of the …, 2021 - openaccess.thecvf.com
Current model extraction attacks assume that the adversary has access to a surrogate
dataset with characteristics similar to the proprietary data used to train the victim model. This …
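
The data-free variant drops that surrogate-dataset assumption by learning a generator that synthesizes queries. A rough sketch of the alternating game follows; note the published attack cannot backpropagate through the black-box victim and instead estimates its gradients with finite differences, so the local `victim` model here is purely for illustration:

```python
import torch
import torch.nn.functional as F

def data_free_extract(victim, student, generator, steps=1000,
                      z_dim=100, batch=64):
    """Sketch of generator-driven extraction with no real data.

    The generator learns to emit queries on which student and victim
    disagree; the student learns to agree with the victim on them.
    """
    opt_s = torch.optim.Adam(student.parameters(), lr=1e-3)
    opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4)
    for _ in range(steps):
        # Generator step: maximize student-victim disagreement.
        z = torch.randn(batch, z_dim)
        x = generator(z)
        loss_g = -F.l1_loss(student(x), victim(x))
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()
        # Student step: minimize disagreement on fresh queries.
        z = torch.randn(batch, z_dim)
        x = generator(z).detach()
        with torch.no_grad():
            t = victim(x)
        loss_s = F.l1_loss(student(x), t)
        opt_s.zero_grad(); loss_s.backward(); opt_s.step()
    return student
```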

Towards data-free model stealing in a hard label setting

S Sanyal, S Addepalli, RV Babu - Proceedings of the IEEE …, 2022 - openaccess.thecvf.com
Machine learning models deployed as a service (MLaaS) are susceptible to model
stealing attacks, where an adversary attempts to steal the model within a restricted access …
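
When the API returns only a class index rather than probabilities, the attacker falls back to ordinary cross-entropy on those hard labels. A minimal sketch, where `victim_api` is a hypothetical endpoint returning argmax labels:

```python
import torch
import torch.nn.functional as F

def hard_label_step(victim_api, clone, x, opt):
    """One clone update when the victim returns only a class index.

    victim_api: callable returning hard labels of shape (B,), the
    restricted access the snippet describes (no probabilities).
    """
    with torch.no_grad():
        y = victim_api(x)                 # hard labels only
    loss = F.cross_entropy(clone(x), y)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```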

Dataset inference for self-supervised models

A Dziedzic, H Duan, MA Kaleem… - Advances in …, 2022 - proceedings.neurips.cc
Self-supervised models are increasingly prevalent in machine learning (ML) since they
reduce the need for expensively labeled data. Because of their versatility in downstream …

Defending against model stealing attacks with adaptive misinformation

S Kariyappa, MK Qureshi - … of the IEEE/CVF Conference on …, 2020 - openaccess.thecvf.com
Deep Neural Networks (DNNs) are susceptible to model stealing attacks, which
allow a data-limited adversary with no knowledge of the training dataset to clone the …
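
The defense's core idea can be sketched in a few lines: answer confident (likely in-distribution) queries truthfully and route low-confidence queries, which data-limited attackers tend to produce, to a deliberately wrong model. This is a simplified hard-threshold rendering; `misinfo_model` and `tau` are assumptions, and the paper blends the two outputs adaptively rather than switching:

```python
import torch
import torch.nn.functional as F

def defended_predict(model, misinfo_model, x, tau=0.9):
    """Sketch of an adaptive-misinformation style defense.

    Confident predictions pass through unchanged; low-confidence
    (likely out-of-distribution) queries receive misleading
    probabilities from a separately trained `misinfo_model`.
    """
    with torch.no_grad():
        p = F.softmax(model(x), dim=1)
        conf = p.max(dim=1).values                 # max softmax prob
        p_bad = F.softmax(misinfo_model(x), dim=1)
        ood = (conf < tau).float().unsqueeze(1)    # 1 = likely OOD
        return ood * p_bad + (1 - ood) * p
```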

Stealing machine learning models: Attacks and countermeasures for generative adversarial networks

H Hu, J Pang - Proceedings of the 37th Annual Computer Security …, 2021 - dl.acm.org
Model extraction attacks aim to duplicate a machine learning model through query access to
a target model. Early studies mainly focus on discriminative models. Despite the success …
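
Against a generative victim, the simplest extraction strategy is to treat the victim's outputs as a training set for a surrogate GAN. A hedged sketch, where `target_sample` stands in for whatever query access the victim's API grants (a hypothetical name) and `D` returns one logit per image:

```python
import torch
import torch.nn.functional as F

def steal_gan(target_sample, G, D, steps=5000, z_dim=128, batch=64):
    """Sketch: clone a generative model from its outputs alone.

    target_sample(n): black-box call returning n samples from the
    victim generator. G, D: the attacker's surrogate generator and
    discriminator, trained as an ordinary GAN whose "real" data is
    the victim's output.
    """
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    ones = torch.ones(batch, 1)
    zeros = torch.zeros(batch, 1)
    for _ in range(steps):
        real = target_sample(batch)                # stolen "data"
        z = torch.randn(batch, z_dim)
        fake = G(z)
        # Discriminator: victim samples vs. surrogate fakes.
        loss_d = (F.binary_cross_entropy_with_logits(D(real), ones)
                  + F.binary_cross_entropy_with_logits(D(fake.detach()), zeros))
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()
        # Generator: fool the discriminator.
        loss_g = F.binary_cross_entropy_with_logits(D(fake), ones)
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return G
```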

APMSA: Adversarial perturbation against model stealing attacks

J Zhang, S Peng, Y Gao, Z Zhang… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Training a Deep Learning (DL) model requires proprietary data and compute-intensive
resources. To recoup their training costs, a model provider can monetize DL models through …

Defending against machine learning model stealing attacks using deceptive perturbations

T Lee, B Edwards, I Molloy, D Su - arXiv preprint arXiv:1806.00054, 2018 - arxiv.org
Machine learning models are vulnerable to simple model stealing attacks if the adversary
can obtain output labels for chosen inputs. To protect against these attacks, it has been …
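
One family of such protections perturbs the returned probability vector so that it still yields the correct top-1 prediction for honest users but gives a noisier training signal to a cloning attacker. A toy sketch of that invariant; the paper's actual perturbation is more structured than uniform noise:

```python
import torch

def deceptive_output(probs, eps=0.2):
    """Return noisy probability vectors that preserve the argmax.

    Honest users relying on the top-1 class are unaffected, while an
    attacker distilling from the full vector trains on a poisoned
    signal.
    """
    noisy = probs + torch.rand_like(probs) * eps
    noisy = noisy / noisy.sum(dim=1, keepdim=True)   # re-normalize
    # Restore the original top-1 class where the noise flipped it,
    # by swapping the two competing entries.
    orig_top = probs.argmax(dim=1)
    new_top = noisy.argmax(dim=1)
    flipped = orig_top != new_top
    if flipped.any():
        idx = torch.arange(probs.size(0))[flipped]
        a, b = orig_top[flipped], new_top[flipped]
        vals_a = noisy[idx, a].clone()
        noisy[idx, a] = noisy[idx, b]
        noisy[idx, b] = vals_a
    return noisy
```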