First to possess his statistics: Data-free model extraction attack on tabular data

M Tasumi, K Iwahana, N Yanai, K Shishido… - arXiv preprint arXiv …, 2021 - arxiv.org
Model extraction attacks are a kind of attack where an adversary obtains a machine
learning model whose performance is comparable with that of the victim model through …

Obfuscating Evasive Decision Trees

S Banerjee, SD Galbraith, G Russello - International Conference on …, 2023 - Springer
We present a new encoder for hiding parameters in an interval membership function. As an
application, we design a simple and efficient virtual black-box obfuscator for evasive …

Confined gradient descent: Privacy-preserving optimization for federated learning

Y Zhang, G Bai, X Li, S Nepal, RKL Ko - arXiv preprint arXiv:2104.13050, 2021 - arxiv.org
Federated learning enables multiple participants to collaboratively train a model without
aggregating the training data. Although the training data are kept within each participant and …

Monitoring-based differential privacy mechanism against query-flooding parameter duplication attack

H Yan, X Li, H Li, J Li, W Sun, F Li - arXiv preprint arXiv:2011.00418, 2020 - arxiv.org
Public intelligent services enabled by machine learning algorithms are vulnerable to model
extraction attacks that can steal confidential information of the learning models through …

Differentially private machine learning model against model extraction attack

Z Cheng, Z Li, J Zhang, S Zhang - … International Conferences on …, 2020 - ieeexplore.ieee.org
Machine learning models are vulnerable to model extraction attacks since attackers can
send plenty of queries to infer the hyperparameters of the machine learning model thus …

Holistic Implicit Factor Evaluation of Model Extraction Attacks

A Yan, H Yan, L Hu, X Liu… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
Model extraction attacks (MEAs) allow adversaries to replicate a surrogate model that mimics
the target model's decision pattern. While several attacks and defenses have been …

BODAME: Bilevel optimization for defense against model extraction

Y Mori, A Nitanda, A Takeda - arXiv preprint arXiv:2103.06797, 2021 - arxiv.org
Model extraction attacks have become serious issues for service providers using machine
learning. We consider an adversarial setting to prevent model extraction under the …

DAS-AST: Defending against model stealing attacks based on adaptive softmax transformation

J Chen, C Wu, S Shen, X Zhang, J Chen - Information Security and …, 2021 - Springer
Deep Neural Networks (DNNs) have been widely applied to diverse real-life
applications and dominate in most cases. Considering the hardware consumption for DNN …

Good Artists Copy, Great Artists Steal: Model Extraction Attacks Against Image Translation Models

S Szyller, V Duddu, T Gröndahl, N Asokan - arXiv preprint arXiv …, 2021 - arxiv.org
Machine learning models are typically made available to potential client users via inference
APIs. Model extraction attacks occur when a malicious client uses information gleaned from …

Understanding Model Extraction Games

X Xian, M Hong, J Ding - … on Trust, Privacy and Security in …, 2022 - ieeexplore.ieee.org
The privacy of machine learning models has become a significant concern in many
emerging Machine-Learning-as-a-Service applications, where prediction services based on …