A survey of privacy attacks in machine learning

M Rigaki, S Garcia - ACM Computing Surveys, 2023 - dl.acm.org
As machine learning becomes more widely used, the need to study its implications in
security and privacy becomes more urgent. Although the body of work in privacy has been …

I know what you trained last summer: A survey on stealing machine learning models and defences

D Oliynyk, R Mayer, A Rauber - ACM Computing Surveys, 2023 - dl.acm.org
Machine-Learning-as-a-Service (MLaaS) has become a widespread paradigm, making
even the most complex Machine Learning models available for clients via, e.g., a pay-per …

Privacy side channels in machine learning systems

E Debenedetti, G Severi, N Carlini… - 33rd USENIX Security …, 2024 - usenix.org
Most current approaches for protecting privacy in machine learning (ML) assume that
models exist in a vacuum. Yet, in reality, these models are part of larger systems that include …

Dawn: Dynamic adversarial watermarking of neural networks

S Szyller, BG Atli, S Marchal, N Asokan - Proceedings of the 29th ACM …, 2021 - dl.acm.org
Training machine learning (ML) models is expensive in terms of computational power,
amounts of labeled data and human expertise. Thus, ML models constitute business value …

Students parrot their teachers: Membership inference on model distillation

M Jagielski, M Nasr, K Lee… - Advances in …, 2024 - proceedings.neurips.cc
Model distillation is frequently proposed as a technique to reduce the privacy
leakage of machine learning. These empirical privacy defenses rely on the intuition that …

Defending against data-free model extraction by distributionally robust defensive training

Z Wang, L Shen, T Liu, T Duan, Y Zhu… - Advances in …, 2024 - proceedings.neurips.cc
Data-Free Model Extraction (DFME) aims to clone a black-box model without
knowing its original training data distribution, making it much easier for attackers to steal …

Visual privacy attacks and defenses in deep learning: a survey

G Zhang, B Liu, T Zhu, A Zhou, W Zhou - Artificial Intelligence Review, 2022 - Springer
The concerns on visual privacy have been increasingly raised along with the dramatic
growth in image and video capture and sharing. Meanwhile, with the recent breakthrough in …

How to steer your adversary: Targeted and efficient model stealing defenses with gradient redirection

M Mazeika, B Li, D Forsyth - International conference on …, 2022 - proceedings.mlr.press
Model stealing attacks present a dilemma for public machine learning APIs. To
protect financial investments, companies may be forced to withhold important information …

Fully Exploiting Every Real Sample: SuperPixel Sample Gradient Model Stealing

Y Zhao, X Deng, Y Liu, X Pei, J Xia… - Proceedings of the …, 2024 - openaccess.thecvf.com
Model stealing (MS) involves querying and observing the output of a machine
learning model to steal its capabilities. The quality of queried data is crucial yet obtaining a …

InverseNet: Augmenting Model Extraction Attacks with Training Data Inversion

X Gong, Y Chen, W Yang, G Mei, Q Wang - IJCAI, 2021 - ijcai.org
Cloud service providers, including Google, Amazon, and Alibaba, have now launched
machine-learning-as-a-service (MLaaS) platforms, allowing clients to access sophisticated …