A survey of privacy attacks in machine learning

M Rigaki, S Garcia - ACM Computing Surveys, 2023 - dl.acm.org
As machine learning becomes more widely used, the need to study its implications in
security and privacy becomes more urgent. Although the body of work in privacy has been …

I know what you trained last summer: A survey on stealing machine learning models and defences

D Oliynyk, R Mayer, A Rauber - ACM Computing Surveys, 2023 - dl.acm.org
Machine-Learning-as-a-Service (MLaaS) has become a widespread paradigm, making
even the most complex Machine Learning models available for clients via, e.g., a pay-per …

Prediction poisoning: Towards defenses against DNN model stealing attacks

T Orekondy, B Schiele, M Fritz - arXiv preprint arXiv:1906.10908, 2019 - arxiv.org
High-performance Deep Neural Networks (DNNs) are increasingly deployed in many real-
world applications, e.g., cloud prediction APIs. Recent advances in model functionality …

DAWN: Dynamic adversarial watermarking of neural networks

S Szyller, BG Atli, S Marchal, N Asokan - Proceedings of the 29th ACM …, 2021 - dl.acm.org
Training machine learning (ML) models is expensive in terms of computational power,
amounts of labeled data and human expertise. Thus, ML models constitute business value …

LF-GDPR: A framework for estimating graph metrics with local differential privacy

Q Ye, H Hu, MH Au, X Meng… - IEEE Transactions on …, 2020 - ieeexplore.ieee.org
Local differential privacy (LDP) is an emerging technique for privacy-preserving data
collection without a trusted collector. Despite its strong privacy guarantee, LDP cannot be …

D-DAE: Defense-penetrating model extraction attacks

Y Chen, R Guan, X Gong, J Dong… - 2023 IEEE Symposium …, 2023 - ieeexplore.ieee.org
Recent studies show that machine learning models are vulnerable to model extraction
attacks, where the adversary builds a substitute model that achieves almost the same …

Beyond value perturbation: Local differential privacy in the temporal setting

Q Ye, H Hu, N Li, X Meng, H Zheng… - IEEE INFOCOM 2021 …, 2021 - ieeexplore.ieee.org
Time series data have numerous application scenarios. However, since many time series
are personal data, releasing them directly could cause privacy infringement. All existing …

Model extraction attacks and defenses on cloud-based machine learning models

X Gong, Q Wang, Y Chen, W Yang… - IEEE Communications …, 2020 - ieeexplore.ieee.org
Machine learning models have achieved state-of-the-art performance in various fields, from
image classification to speech recognition. However, such models are trained with a large …

PrivKVM*: Revisiting key-value statistics estimation with local differential privacy

Q Ye, H Hu, X Meng, H Zheng, K Huang… - … on Dependable and …, 2021 - ieeexplore.ieee.org
A key factor in big data analytics and artificial intelligence is the collection of user data from a
large population. However, the collection of user data comes at the price of privacy risks, not …

Monitoring-based differential privacy mechanism against query flooding-based model extraction attack

H Yan, X Li, H Li, J Li, W Sun, F Li - IEEE Transactions on …, 2021 - ieeexplore.ieee.org
Public intelligent services enabled by machine learning algorithms are vulnerable to model
extraction attacks that can steal confidential information of the learning models through …