BDPL: A boundary differentially private layer against machine learning model extraction attacks

H Zheng, Q Ye, H Hu, C Fang, J Shi - … 23–27, 2019, Proceedings, Part I 24, 2019 - Springer
Abstract: Machine learning models trained with large volumes of proprietary data and intensive
computational resources are valuable assets of their owners, who merchandise these …
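The snippet is truncated, but the mechanism the title points to can be sketched: answer queries that fall near the decision boundary with a randomized-response label, so an extraction adversary cannot pin the boundary down precisely. A minimal Python sketch, assuming a binary classifier; `bdpl_predict`, `epsilon`, and `delta_zone` are illustrative names and parameters, not the paper's exact construction:

```python
import math
import random

def bdpl_predict(score, epsilon=1.0, delta_zone=0.1):
    """Return a (possibly perturbed) binary label for one query.

    score: the model's probability for the positive class, in [0, 1].
    Queries inside the boundary-sensitive zone |score - 0.5| < delta_zone
    get a randomized-response label; queries far from the boundary are
    answered truthfully.
    """
    label = 1 if score >= 0.5 else 0
    if abs(score - 0.5) >= delta_zone:
        return label  # far from the decision boundary: no perturbation
    # Boundary randomized response: keep the true label with
    # probability e^eps / (1 + e^eps), otherwise flip it.
    keep_prob = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return label if random.random() < keep_prob else 1 - label
```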

Protecting decision boundary of machine learning model with differentially private perturbation

H Zheng, Q Ye, H Hu, C Fang… - IEEE Transactions on …, 2020 - ieeexplore.ieee.org
Machine learning service APIs allow model owners to monetize proprietary models by
offering prediction services to third-party users. However, existing literature shows that …

Monitoring-based differential privacy mechanism against query flooding-based model extraction attack

H Yan, X Li, H Li, J Li, W Sun, F Li - IEEE Transactions on …, 2021 - ieeexplore.ieee.org
Public intelligent services enabled by machine learning algorithms are vulnerable to model
extraction attacks that can steal confidential information about the learning models through …
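A hedged sketch of the monitoring idea: track each client's recent query volume and widen the Laplace noise on returned scores once the volume starts to look like flooding. The window, rate threshold, and noise schedule below are illustrative assumptions, not the paper's calibrated mechanism:

```python
import time
from collections import defaultdict, deque

import numpy as np

class MonitoredDPService:
    """Per-client query rates are tracked; the Laplace noise added to the
    model's output scores grows as a client's rate exceeds a threshold."""

    def __init__(self, model, window_s=60.0, rate_threshold=100,
                 base_scale=0.05, max_scale=1.0):
        self.model = model                 # callable: x -> score vector
        self.window_s = window_s
        self.rate_threshold = rate_threshold
        self.base_scale = base_scale
        self.max_scale = max_scale
        self.history = defaultdict(deque)  # client_id -> query timestamps

    def _noise_scale(self, client_id):
        now = time.time()
        q = self.history[client_id]
        q.append(now)
        while q and now - q[0] > self.window_s:  # drop stale timestamps
            q.popleft()
        # Grow the noise linearly with how far the rate exceeds the threshold.
        excess = max(0, len(q) - self.rate_threshold)
        return min(self.max_scale,
                   self.base_scale * (1 + excess / self.rate_threshold))

    def predict(self, client_id, x):
        scores = np.asarray(self.model(x), dtype=float)
        b = self._noise_scale(client_id)
        return scores + np.random.laplace(0.0, b, size=scores.shape)
```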

Membership inference attack against differentially private deep learning model.

MA Rahman, T Rahman, R Laganière, N Mohammed… - Trans. Data Priv., 2018 - tdp.cat
The unprecedented success of deep learning is largely dependent on the availability of
massive amounts of training data. In many cases, these data are crowd-sourced and may …
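For context on what such an attack looks like in code, here is one of the simplest variants, a confidence-thresholding membership inference attack (the paper's attacks are more elaborate); the threshold and data below are made up for illustration:

```python
import numpy as np

def confidence_mia(confidences, threshold=0.9):
    """Guess 'member' when the target model's top softmax confidence on a
    sample exceeds a threshold; models tend to be more confident on their
    own training data. The 0.9 cutoff is arbitrary here; in practice it is
    tuned, e.g. on shadow-model outputs."""
    return np.asarray(confidences) >= threshold  # True -> guessed member

# Illustrative use with made-up confidence values:
member_conf = [0.99, 0.97, 0.88]     # samples actually in the training set
nonmember_conf = [0.71, 0.95, 0.60]  # held-out samples
guesses = confidence_mia(member_conf + nonmember_conf)
```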

Defending privacy against more knowledgeable membership inference attackers

Y Yin, K Chen, L Shou, G Chen - Proceedings of the 27th ACM SIGKDD …, 2021 - dl.acm.org
Membership Inference Attack (MIA) in deep learning is a common form of privacy attack
which aims to infer whether a data sample is in a target classifier's training dataset or not …

Not one but many tradeoffs: Privacy vs. utility in differentially private machine learning

BZH Zhao, MA Kaafar, N Kourtellis - Proceedings of the 2020 ACM …, 2020 - dl.acm.org
Data holders are increasingly seeking to protect their users' privacy, whilst still maximizing
their ability to produce machine learning (ML) models with high-quality predictions. In this …

Towards measuring membership privacy

Y Long, V Bindschaedler, CA Gunter - arXiv preprint arXiv:1712.09136, 2017 - arxiv.org
Machine learning models are increasingly made available to the masses through public
query interfaces. Recent academic work has demonstrated that malicious users who can …

One parameter defense—defending against data inference attacks via differential privacy

D Ye, S Shen, T Zhu, B Liu… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
Machine learning models are vulnerable to data inference attacks, such as membership
inference and model inversion attacks. In these types of breaches, an adversary attempts to …
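As a rough illustration of a one-knob differentially private defense on confidence vectors (the paper's exact mechanism and sensitivity analysis are not reproduced here), one can add Laplace noise scaled by a single privacy parameter and re-normalize:

```python
import numpy as np

def dp_confidence_vector(scores, epsilon=1.0):
    """Perturb a classifier's confidence scores with noise controlled by a
    single privacy parameter, then re-normalize via softmax so the output
    is still a probability vector. This only illustrates the one-parameter
    idea; it is not the paper's mechanism."""
    scores = np.asarray(scores, dtype=float)
    noisy = scores + np.random.laplace(0.0, 1.0 / epsilon, size=scores.shape)
    exp = np.exp(noisy - noisy.max())  # numerically stable softmax
    return exp / exp.sum()
```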

Broadening differential privacy for deep learning against model inversion attacks

Q Zhang, J Ma, Y Xiao, J Lou… - 2020 IEEE International …, 2020 - ieeexplore.ieee.org
Deep learning models have achieved great success in many real-world tasks such as image
recognition, machine translation, and self-driving cars. Large amounts of data are needed to …

Increasing the cost of model extraction with calibrated proof of work

A Dziedzic, MA Kaleem, YS Lu, N Papernot - arXiv preprint arXiv …, 2022 - arxiv.org
In model extraction attacks, adversaries can steal a machine learning model exposed via a
public API by repeatedly querying it and adjusting their own model based on obtained …
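A sketch of the calibrated proof-of-work idea: before answering, the server issues a hashcash-style puzzle whose difficulty grows with the estimated information leakage of the query, making high-volume extraction expensive. The linear difficulty schedule and helper names below are assumptions for illustration:

```python
import hashlib
import os

def difficulty_bits(leakage, base_bits=8, max_bits=24):
    """Map an estimated per-query information-leakage score in [0, 1] to a
    hashcash difficulty. The linear schedule is an assumption; the paper
    calibrates the cost to a privacy estimate of each query."""
    leakage = min(max(leakage, 0.0), 1.0)
    return base_bits + int((max_bits - base_bits) * leakage)

def make_puzzle():
    return os.urandom(16)  # server-chosen random challenge

def solve(challenge, bits):
    """Client side: find a nonce so that SHA-256(challenge || nonce) has
    `bits` leading zero bits (hashcash-style proof of work)."""
    target = 1 << (256 - bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify(challenge, nonce, bits):
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - bits))

# Illustrative round trip: an innocuous query gets a cheap puzzle.
ch = make_puzzle()
bits = difficulty_bits(0.2)
assert verify(ch, solve(ch, bits), bits)
```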