Membership inference attacks on machine learning: A survey

H Hu, Z Salcic, L Sun, G Dobbie, PS Yu… - ACM Computing Surveys …, 2022 - dl.acm.org
Machine learning (ML) models are widely used in applications including image classification, text generation, audio recognition, and graph data analysis. However …

Adversarial interference and its mitigations in privacy-preserving collaborative machine learning

D Usynin, A Ziller, M Makowski, R Braren… - Nature Machine …, 2021 - nature.com
Despite the rapid increase in the data available to train machine-learning algorithms in many domains, several applications suffer from a paucity of representative and diverse data. The …

Membership inference attacks from first principles

N Carlini, S Chien, M Nasr, S Song… - … IEEE Symposium on …, 2022 - ieeexplore.ieee.org
A membership inference attack allows an adversary to query a trained machine learning
model to predict whether or not a particular example was contained in the model's training …
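
This attack setting can be illustrated with a simple loss-threshold baseline: query the trained model on a candidate record and flag it as a training member when the model's loss on that record is unusually low. The sketch below is a generic baseline, not the likelihood-ratio attack developed in the paper above; `model_predict` and the threshold `tau` (calibrated on data known to be non-members) are hypothetical.

```python
# Illustrative sketch only: a simple loss-threshold membership inference
# baseline (not the likelihood-ratio attack from the paper above).
# Assumes `model_predict(x)` returns a probability vector over classes and
# `tau` is a threshold calibrated on known non-members; both are hypothetical.
import numpy as np

def cross_entropy_loss(probs, label):
    # Per-example loss of the target model's prediction on (x, y).
    return -np.log(probs[label] + 1e-12)

def infer_membership(model_predict, x, y, tau):
    # Members tend to have lower loss than non-members, so predict
    # "member" when the loss falls below the calibrated threshold.
    loss = cross_entropy_loss(model_predict(x), y)
    return loss < tau
```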

What does it mean for a language model to preserve privacy?

H Brown, K Lee, F Mireshghallah, R Shokri… - Proceedings of the 2022 …, 2022 - dl.acm.org
Natural language reflects our private lives and identities, making its privacy concerns as
broad as those of real life. Language models lack the ability to understand the context and …

Label-only membership inference attacks

CA Choquette-Choo, F Tramer… - International …, 2021 - proceedings.mlr.press
Membership inference is one of the simplest privacy threats faced by machine learning models trained on private, sensitive data. In this attack, an adversary infers whether a …
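
In the label-only setting the adversary sees only predicted classes, not confidence scores, so a membership signal has to be derived from the labels themselves. The sketch below illustrates one such signal in the spirit of the paper's approach: training members tend to sit farther from the decision boundary, so their predictions are more robust to small perturbations. The `predict_label` function and the noise parameters are hypothetical, and this is a simplified stand-in for the paper's boundary-distance attacks.

```python
# Simplified sketch of a label-only membership signal. Assumes a
# hypothetical `predict_label(x)` that returns only the predicted class
# for a numpy input x.
import numpy as np

def robustness_score(predict_label, x, y, n_trials=50, sigma=0.05, rng=None):
    rng = rng or np.random.default_rng(0)
    # Fraction of Gaussian-perturbed copies of x that keep the label y;
    # a higher score suggests x is more likely a training member.
    hits = sum(predict_label(x + sigma * rng.standard_normal(x.shape)) == y
               for _ in range(n_trials))
    return hits / n_trials
```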

Privacy and security issues in deep learning: A survey

X Liu, L Xie, Y Wang, J Zou, J Xiong, Z Ying… - IEEE …, 2020 - ieeexplore.ieee.org
Deep Learning (DL) algorithms based on artificial neural networks have achieved
remarkable success and are being extensively applied in a variety of application domains …

Memguard: Defending against black-box membership inference attacks via adversarial examples

J Jia, A Salem, M Backes, Y Zhang… - Proceedings of the 2019 …, 2019 - dl.acm.org
In a membership inference attack, an attacker aims to infer whether a data sample is in a target classifier's training dataset or not. Specifically, given black-box access to the target …
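
MemGuard defends by perturbing the confidence vector returned to the querier while keeping it useful. The toy sketch below shows only the two utility constraints involved (the predicted label is unchanged and the output remains a probability distribution); the actual method crafts the noise as an adversarial example against the attacker's membership classifier, whereas this sketch just draws random noise.

```python
# Toy illustration of confidence masking under MemGuard-style utility
# constraints: perturb the confidence vector while preserving the predicted
# label and a valid probability distribution. The real defense optimizes
# the noise against the attack classifier; here it is simply random.
import numpy as np

def mask_confidences(probs, scale=0.1, rng=None):
    rng = rng or np.random.default_rng(0)
    label = int(np.argmax(probs))
    for _ in range(100):                      # retry until constraints hold
        noisy = probs + scale * rng.standard_normal(probs.shape)
        noisy = np.clip(noisy, 1e-6, None)
        noisy = noisy / noisy.sum()           # back onto the simplex
        if int(np.argmax(noisy)) == label:    # predicted label preserved
            return noisy
    return probs  # fall back to the unmodified scores
```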

Evaluating differentially private machine learning in practice

B Jayaraman, D Evans - 28th USENIX Security Symposium (USENIX …, 2019 - usenix.org
Differential privacy is a strong notion of privacy that can be used to prove formal guarantees, in terms of a privacy budget, ε, about how much information is leaked by a …
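
The privacy budget ε referenced here is the parameter in the standard (ε, δ)-differential-privacy definition, restated below for reference (this is the textbook definition, not a result specific to the paper):

```latex
% (eps, delta)-differential privacy: a randomized mechanism M satisfies it
% if for any two datasets D, D' differing in one record and any outcome set S,
\[
  \Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[\mathcal{M}(D') \in S] + \delta .
\]
% Smaller eps (and delta) means the output reveals less about any single record.
```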

Ml-leaks: Model and data independent membership inference attacks and defenses on machine learning models

A Salem, Y Zhang, M Humbert, P Berrang… - arXiv preprint arXiv …, 2018 - arxiv.org
Machine learning (ML) has become a core component of many real-world applications, and training data is a key factor driving current progress. This huge success has led Internet …

Membership leakage in label-only exposures

Z Li, Y Zhang - Proceedings of the 2021 ACM SIGSAC Conference on …, 2021 - dl.acm.org
Machine learning (ML) has been widely adopted in various privacy-critical applications, e.g., face recognition and medical image analysis. However, recent research has shown that ML …