Understanding Model Extraction Games

X Xian, M Hong, J Ding - … on Trust, Privacy and Security in …, 2022 - ieeexplore.ieee.org
The privacy of machine learning models has become a significant concern in many
emerging Machine-Learning-as-a-Service applications, where prediction services based on …

A framework for understanding model extraction attack and defense

X Xian, M Hong, J Ding - arXiv preprint arXiv:2206.11480, 2022 - arxiv.org
The privacy of machine learning models has become a significant concern in many
emerging Machine-Learning-as-a-Service applications, where prediction services based on …

A Defense Framework for Privacy Risks in Remote Machine Learning Service

Y Bai, Y Li, M Xie, M Fan - Security and Communication …, 2021 - Wiley Online Library
In recent years, machine learning approaches have been widely adopted for many
applications, including classification. Machine learning models deal with collective sensitive …

APMSA: Adversarial perturbation against model stealing attacks

J Zhang, S Peng, Y Gao, Z Zhang… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Training a Deep Learning (DL) model requires proprietary data and computing-intensive
resources. To recoup their training costs, a model provider can monetize DL models through …

ML defense: against prediction API threats in cloud-based machine learning service

J Hou, J Qian, Y Wang, XY Li, H Du… - Proceedings of the …, 2019 - dl.acm.org
Machine learning (ML) has shown its impressive performance in the modern world, and
many corporations leverage the technique of machine learning to improve their service …

The QoS and privacy trade-off of adversarial deep learning: an evolutionary game approach

Z Sun, L Yin, C Li, W Zhang, A Li, Z Tian - Computers & Security, 2020 - Elsevier
Deep learning-based service has received great success in many fields and changed our
daily lives profoundly. To support such service, the provider needs to continually collect data …

Information laundering for model privacy

X Wang, Y Xiang, J Gao, J Ding - arXiv preprint arXiv:2009.06112, 2020 - arxiv.org
In this work, we propose information laundering, a novel framework for enhancing model
privacy. Unlike data privacy that concerns the protection of raw data information, model …

Careful what you wish for: on the extraction of adversarially trained models

K Khaled, G Nicolescu… - 2022 19th Annual …, 2022 - ieeexplore.ieee.org
Recent attacks on Machine Learning (ML) models such as evasion attacks with adversarial
examples and model stealing through extraction attacks pose several security and privacy …

Increasing the cost of model extraction with calibrated proof of work

A Dziedzic, MA Kaleem, YS Lu, N Papernot - arXiv preprint arXiv …, 2022 - arxiv.org
In model extraction attacks, adversaries can steal a machine learning model exposed via a
public API by repeatedly querying it and adjusting their own model based on obtained …

Bad citrus: Reducing adversarial costs with model distances

G Severi, W Pearce, A Oprea - 2022 21st IEEE International …, 2022 - ieeexplore.ieee.org
Recent work by Jia et al. [1] showed the possibility of effectively computing pairwise model
distances in weight space, using a model explanation technique known as LIME. This …