Monitoring-based differential privacy mechanism against query flooding-based model extraction attack

H Yan, X Li, H Li, J Li, W Sun, F Li - IEEE Transactions on …, 2021 - ieeexplore.ieee.org
Public intelligent services enabled by machine learning algorithms are vulnerable to model
extraction attacks that can steal confidential information of the learning models through …
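
To make the flavor of such a defense concrete, here is a minimal Python sketch of output perturbation whose Laplace scale grows with a client's observed query volume; the wrapper class, the counter-based monitor, and the noise schedule are illustrative assumptions, not the mechanism proposed in the paper.

```python
import numpy as np

class MonitoredDPPredictor:
    """Illustrative wrapper: Laplace output perturbation whose scale grows
    with a client's observed query volume (hypothetical schedule)."""

    def __init__(self, model, base_scale=0.01, budget=1000):
        self.model = model            # any object exposing predict_proba(x)
        self.base_scale = base_scale  # noise scale for a low-volume client
        self.budget = budget          # query count where the ramp begins
        self.counts = {}              # per-client counter: the "monitor"

    def query(self, client_id, x):
        n = self.counts.get(client_id, 0) + 1
        self.counts[client_id] = n
        # Past the budget, the Laplace scale grows linearly with volume,
        # degrading answers for flooding clients while sparing normal ones.
        scale = self.base_scale * max(1.0, n / self.budget)
        probs = self.model.predict_proba(x)
        noisy = np.clip(probs + np.random.laplace(0.0, scale, probs.shape), 0, None)
        return noisy / noisy.sum(axis=-1, keepdims=True)
```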

BDPL: A boundary differentially private layer against machine learning model extraction attacks

H Zheng, Q Ye, H Hu, C Fang, J Shi - … 23–27, 2019, Proceedings, Part I 24, 2019 - Springer
Machine learning models trained with large volumes of proprietary data and intensive
computational resources are valuable assets of their owners, who merchandise these …
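
The core idea of a boundary differentially private layer can be sketched as randomized response applied only to queries that fall near the decision boundary. The margin test and the flip rate below are a minimal illustration under that reading, not the paper's exact construction.

```python
import numpy as np

def boundary_dp_label(prob_pos, epsilon=1.0, margin=0.1, rng=None):
    """Answer a binary query; only points inside the decision-boundary
    margin get a randomized-response answer (illustrative rule)."""
    rng = rng or np.random.default_rng()
    label = int(prob_pos >= 0.5)
    if abs(prob_pos - 0.5) >= margin:
        return label  # far from the boundary: answer truthfully
    # Near the boundary: keep the label with probability e^eps / (1 + e^eps),
    # i.e. flip it with probability 1 / (1 + e^eps), the standard
    # randomized-response rates for an epsilon-DP answer.
    keep = np.exp(epsilon) / (1.0 + np.exp(epsilon))
    return label if rng.random() < keep else 1 - label
```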

Protecting decision boundary of machine learning model with differentially private perturbation

H Zheng, Q Ye, H Hu, C Fang… - IEEE Transactions on …, 2020 - ieeexplore.ieee.org
Machine learning service APIs allow model owners to monetize proprietary models by
offering prediction services to third-party users. However, existing literature shows that …

Protecting regression models with personalized local differential privacy

X Li, H Yan, Z Cheng, W Sun… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
The equation-solving model extraction attack is an intuitively simple but devastating attack
that steals confidential information from regression models through a sufficient number of queries …
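
For a plain linear model f(x) = w·x + b, the equation-solving attack reduces to linear algebra: d + 1 well-chosen queries determine all d + 1 parameters exactly. A self-contained toy demonstration (the victim model and API here are hypothetical):

```python
import numpy as np

# Hypothetical victim: a linear regression model behind a prediction API.
d = 5
true_w, true_b = np.random.randn(d), np.random.randn()
def api(x):                              # the attacker's only access
    return float(true_w @ x + true_b)

# Attacker: d + 1 linearly independent queries pin down every parameter.
X = np.vstack([np.zeros(d), np.eye(d)])  # the origin plus the basis vectors
y = np.array([api(x) for x in X])
b_hat = y[0]                             # f(0) = b
w_hat = y[1:] - b_hat                    # f(e_i) = w_i + b
assert np.allclose(w_hat, true_w) and np.isclose(b_hat, true_b)
```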

One parameter defense—defending against data inference attacks via differential privacy

D Ye, S Shen, T Zhu, B Liu… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
Machine learning models are vulnerable to data inference attacks, such as membership
inference and model inversion attacks. In these types of breaches, an adversary attempts to …
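
The paper's defense tunes a single distortion parameter on the returned confidence vector. As a rough stand-in for that one-knob trade-off (not the authors' transformation), temperature scaling flattens confidences toward uniform while preserving the predicted label:

```python
import numpy as np

def mask_confidences(probs, t=2.0):
    """One-knob confidence masking via temperature scaling (a stand-in for
    the paper's transformation): t > 1 flattens the vector toward uniform
    while the argmax, and hence test accuracy, is unchanged."""
    logits = np.log(np.clip(probs, 1e-12, 1.0)) / t
    out = np.exp(logits - logits.max())
    return out / out.sum()
```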

Increasing the cost of model extraction with calibrated proof of work

A Dziedzic, MA Kaleem, YS Lu, N Papernot - arXiv preprint arXiv …, 2022 - arxiv.org
In model extraction attacks, adversaries can steal a machine learning model exposed via a
public API by repeatedly querying it and adjusting their own model based on obtained …
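
A hashcash-style puzzle illustrates the mechanism: the server hands out a challenge, the client must find a nonce whose hash clears a difficulty target, and verification is cheap. The calibration function below, which maps an estimated leakage to a difficulty, is a placeholder for the paper's privacy-cost-based calibration:

```python
import hashlib
import itertools

def difficulty(leakage_bits: float) -> int:
    """Placeholder calibration: more estimated leakage, more leading
    zero bits required, i.e. exponentially more client work."""
    return min(8 + int(leakage_bits), 24)

def solve_pow(challenge: bytes, bits: int) -> int:
    """Client side: search for a nonce whose SHA-256 hash clears the target."""
    target = 1 << (256 - bits)
    for nonce in itertools.count():
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify_pow(challenge: bytes, nonce: int, bits: int) -> bool:
    """Server side: verification costs a single hash."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - bits))
```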

An attack-based evaluation method for differentially private learning against model inversion attack

C Park, D Hong, C Seo - IEEE Access, 2019 - ieeexplore.ieee.org
As the amount of data and computational power increase explosively, valuable results are
being created using machine learning techniques. In particular, models based on deep …

Defending privacy against more knowledgeable membership inference attackers

Y Yin, K Chen, L Shou, G Chen - Proceedings of the 27th ACM SIGKDD …, 2021 - dl.acm.org
Membership Inference Attack (MIA) in deep learning is a common form of privacy attack
that aims to infer whether a data sample is in a target classifier's training dataset or not …
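
The weakest form of the attack being defended against can be written in a few lines: guess "member" whenever the model is unusually confident in the true label. More knowledgeable attackers (the paper's focus) replace the fixed threshold with shadow-model calibration; this baseline is only for orientation:

```python
import numpy as np

def confidence_mia(model, X, y, threshold=0.9):
    """Baseline membership inference: guess 'member' when the model's
    confidence in the true label exceeds a fixed threshold. Knowledgeable
    attackers replace the threshold with shadow-model calibration."""
    conf = model.predict_proba(X)[np.arange(len(y)), y]
    return conf > threshold  # boolean membership guesses per sample
```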

Not one but many tradeoffs: Privacy vs. utility in differentially private machine learning

BZH Zhao, MA Kaafar, N Kourtellis - Proceedings of the 2020 ACM …, 2020 - dl.acm.org
Data holders are increasingly seeking to protect their users' privacy, whilst still maximizing
their ability to produce machine learning (ML) models with high quality predictions. In this …

Querysnout: Automating the discovery of attribute inference attacks against query-based systems

AM Cretu, F Houssiau, A Cully… - Proceedings of the 2022 …, 2022 - dl.acm.org
Although query-based systems (QBS) have become one of the main solutions to share data
anonymously, building QBSes that robustly protect the privacy of individuals contributing to …
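
A hand-crafted example of the attack class QuerySnout discovers automatically is the classic difference-and-average attack on a noisy COUNT interface; the toy QBS below (illustrative, not a system evaluated in the paper) leaks a target's binary attribute after enough repeated queries:

```python
import numpy as np

# Toy QBS: answers COUNT queries over a secret binary attribute with
# Gaussian noise added to each answer.
records = {"alice": 1, "bob": 0, "carol": 1}
def qbs_count(names):
    return sum(records[n] for n in names) + np.random.normal(0, 1)

# Difference-and-average attack on "alice": each pair of queries differs
# only in her record, so averaging the differences cancels the noise.
others = ["bob", "carol"]
diffs = [qbs_count(["alice"] + others) - qbs_count(others) for _ in range(500)]
print(round(np.mean(diffs)))  # recovers alice's attribute with high probability
```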