D-DAE: Defense-penetrating model extraction attacks

Y Chen, R Guan, X Gong, J Dong… - 2023 IEEE Symposium …, 2023 - ieeexplore.ieee.org
Recent studies show that machine learning models are vulnerable to model extraction
attacks, where the adversary builds a substitute model that achieves almost the same …
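The query-based extraction loop that this line of work studies can be sketched in a few lines. The following is a minimal, hypothetical illustration (a linear-softmax victim and substitute, with gradient steps on the victim's returned soft labels), not the attack from any specific paper listed here; all names are illustrative:

```python
import numpy as np

def victim_api(x, w_victim):
    """Black-box victim: returns class probabilities for a batch of inputs."""
    logits = x @ w_victim
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def extract(query_fn, dim, n_classes, n_queries=5000, lr=0.5, rng=None):
    """Fit a substitute linear-softmax model to the victim's soft labels."""
    rng = np.random.default_rng() if rng is None else rng
    w = np.zeros((dim, n_classes))
    for _ in range(n_queries // 50):
        x = rng.normal(size=(50, dim))    # attacker-chosen queries
        y = query_fn(x)                   # posteriors returned by the API
        logits = x @ w
        e = np.exp(logits - logits.max(axis=1, keepdims=True))
        p = e / e.sum(axis=1, keepdims=True)
        w -= lr * x.T @ (p - y) / len(x)  # cross-entropy gradient step
    return w
```

With enough queries the substitute's top-1 predictions track the victim's, which is the "almost the same" behaviour the snippet above alludes to.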

InverseNet: Augmenting Model Extraction Attacks with Training Data Inversion

X Gong, Y Chen, W Yang, G Mei, Q Wang - IJCAI, 2021 - ijcai.org
Cloud service providers, including Google, Amazon, and Alibaba, have now launched
machine-learning-as-a-service (MLaaS) platforms, allowing clients to access sophisticated …

Defending against data-free model extraction by distributionally robust defensive training

Z Wang, L Shen, T Liu, T Duan, Y Zhu… - Advances in …, 2024 - proceedings.neurips.cc
Data-Free Model Extraction (DFME) aims to clone a black-box model without
knowing its original training data distribution, making it much easier for attackers to steal …

A comprehensive defense framework against model extraction attacks

W Jiang, H Li, G Xu, T Zhang… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
As a promising service, Machine Learning as a Service (MLaaS) provides personalized
inference functions for clients through paid APIs. Nevertheless, it is vulnerable to model …

Defending against machine learning model stealing attacks using deceptive perturbations

T Lee, B Edwards, I Molloy, D Su - arXiv preprint arXiv:1806.00054, 2018 - arxiv.org
Machine learning models are vulnerable to simple model stealing attacks if the adversary
can obtain output labels for chosen inputs. To protect against these attacks, it has been …
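The defense idea in this entry can be illustrated with a toy sketch: return perturbed posteriors that give an extraction adversary misleading soft labels while leaving the benign top-1 prediction unchanged. The function name and noise model below are hypothetical stand-ins, not the paper's actual construction:

```python
import numpy as np

def deceptive_perturbation(probs, eps=0.2, rng=None):
    """Noise a probability vector while keeping the top-1 label fixed.

    Hypothetical illustration: the defender distorts the returned
    posteriors (hurting an adversary training on soft labels) but
    preserves the argmax, so benign top-1 accuracy is unaffected.
    """
    rng = np.random.default_rng() if rng is None else rng
    probs = np.asarray(probs, dtype=float)
    top = int(probs.argmax())
    noisy = probs + rng.uniform(0.0, eps, size=probs.shape)
    noisy /= noisy.sum()            # re-normalise to a valid distribution
    j = int(noisy.argmax())
    if j != top:                    # restore the original top-1 label
        noisy[top], noisy[j] = noisy[j], noisy[top]
    return noisy
```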

MEAOD: Model extraction attack against object detectors

Z Li, C Shi, Y Pu, X Zhang, Y Li, J Li, S Ji - arXiv preprint arXiv:2312.14677, 2023 - arxiv.org
The widespread use of deep learning technology across various industries has made deep
neural network models highly valuable and, as a result, attractive targets for potential …

Isolation and induction: Training robust deep neural networks against model stealing attacks

J Guo, X Zheng, A Liu, S Liang, Y Xiao, Y Wu… - Proceedings of the 31st …, 2023 - dl.acm.org
Despite the broad application of Machine Learning models as a Service (MLaaS), they are
vulnerable to model stealing attacks. These attacks can replicate the model functionality by …

SAME: Sample Reconstruction against Model Extraction Attacks

Y Xie, J Zhang, S Zhao, T Zhang, X Chen - Proceedings of the AAAI …, 2024 - ojs.aaai.org
While deep learning models have shown significant performance across various domains,
their deployment needs extensive resources and advanced computing infrastructure. As a …

MEGEX: Data-free model extraction attack against gradient-based explainable AI

T Miura, T Shibahara, N Yanai - Proceedings of the 2nd ACM Workshop …, 2024 - dl.acm.org
Explainable AI encourages machine learning applications in the real world, whereas data-
free model extraction attacks (DFME), in which an adversary steals a trained machine …

Increasing the cost of model extraction with calibrated proof of work

A Dziedzic, MA Kaleem, YS Lu, N Papernot - arXiv preprint arXiv …, 2022 - arxiv.org
In model extraction attacks, adversaries can steal a machine learning model exposed via a
public API by repeatedly querying it and adjusting their own model based on obtained …
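The cost-increasing idea can be sketched with a toy hash-based proof of work whose difficulty grows with recent query volume: heavy queriers pay progressively more compute per query. The calibration rule and function names below are hypothetical, not Dziedzic et al.'s actual scheme:

```python
import hashlib
import itertools

def pow_difficulty(num_recent_queries, base_bits=8, step=2, cap=24):
    """Hypothetical calibration: required leading-zero bits grow with load."""
    return min(base_bits + step * (num_recent_queries // 100), cap)

def solve_pow(query, bits):
    """Client side: search for a nonce whose hash clears the difficulty."""
    target = 1 << (256 - bits)
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{query}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify_pow(query, nonce, bits):
    """Server side: a single hash checks the submitted nonce."""
    digest = hashlib.sha256(f"{query}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - bits))
```

Verification stays cheap for the API while the expected solving cost doubles with each extra difficulty bit, which is what makes high-volume extraction queries expensive.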