Perturbing inputs to prevent model stealing

J Grana - 2020 IEEE Conference on Communications and …, 2020 - ieeexplore.ieee.org
We show how perturbing inputs to machine learning services (ML-service) deployed in the
cloud can protect against model stealing attacks. In our formulation, there is an ML-service …

Model extraction attacks and defenses on cloud-based machine learning models

X Gong, Q Wang, Y Chen, W Yang… - IEEE Communications …, 2020 - ieeexplore.ieee.org
Machine learning models have achieved state-of-the-art performance in various fields, from
image classification to speech recognition. However, such models are trained with a large …

Defending against neural network model stealing attacks using deceptive perturbations

T Lee, B Edwards, I Molloy, D Su - 2019 IEEE Security and …, 2019 - ieeexplore.ieee.org
Machine learning architectures are readily available, but obtaining the high quality labeled
data for training is costly. Pre-trained models available as cloud services can be used to …

Detection of compromised models using Bayesian optimization

DP Kuttichira, S Gupta, D Nguyen, S Rana… - AI 2019: Advances in …, 2019 - Springer
Modern AI is largely driven by machine learning. Recent machine learning algorithms such
as deep neural networks (DNNs) have become quite effective in many recognition tasks, e.g. …

Special-Purpose Model Extraction Attacks: Stealing Coarse Model with Fewer Queries

R Okada, Z Ishikura, T Shibahara… - 2020 IEEE 19th …, 2020 - ieeexplore.ieee.org
Model extraction (ME) attacks have been shown to cause financial losses for Machine-
Learning-as-a-Service (MLaaS) providers. Attackers steal ML models on MLaaS platforms …

Exploring connections between active learning and model extraction

V Chandrasekaran, K Chaudhuri, I Giacomelli… - 29th USENIX Security …, 2020 - usenix.org
Machine learning is being increasingly used by individuals, research institutions, and
corporations. This has resulted in the surge of Machine Learning-as-a-Service (MLaaS) …

APMSA: Adversarial perturbation against model stealing attacks

J Zhang, S Peng, Y Gao, Z Zhang… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Training a Deep Learning (DL) model requires proprietary data and computing-intensive
resources. To recoup their training costs, a model provider can monetize DL models through …

FDINet: Protecting against DNN model extraction via feature distortion index

H Yao, Z Li, H Weng, F Xue, K Ren, Z Qin - arXiv preprint arXiv …, 2023 - arxiv.org
Machine Learning as a Service (MLaaS) platforms have gained popularity due to their
accessibility, cost-efficiency, scalability, and rapid development capabilities. However …

Efficient Data-Free Model Stealing with Label Diversity

Y Liu, R Wen, M Backes, Y Zhang - arXiv preprint arXiv:2404.00108, 2024 - arxiv.org
Machine Learning as a Service (MLaaS) allows users to query the machine learning model
in an API manner, which provides an opportunity for users to enjoy the benefits brought by …

Stateful detection of model extraction attacks

S Pal, Y Gupta, A Kanade, S Shevade - arXiv preprint arXiv:2107.05166, 2021 - arxiv.org
Machine-Learning-as-a-Service providers expose machine learning (ML) models through
application programming interfaces (APIs) to developers. Recent work has shown that …