Membership inference attacks on machine learning: A survey

H Hu, Z Salcic, L Sun, G Dobbie, PS Yu… - ACM Computing Surveys …, 2022 - dl.acm.org
Machine learning (ML) models have been widely applied to various applications, including
image classification, text generation, audio recognition, and graph data analysis. However …

A survey of privacy attacks in machine learning

M Rigaki, S Garcia - ACM Computing Surveys, 2023 - dl.acm.org
As machine learning becomes more widely used, the need to study its implications in
security and privacy becomes more urgent. Although the body of work in privacy has been …

Membership inference attacks from first principles

N Carlini, S Chien, M Nasr, S Song… - … IEEE Symposium on …, 2022 - ieeexplore.ieee.org
A membership inference attack allows an adversary to query a trained machine learning
model to predict whether or not a particular example was contained in the model's training …
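The mechanism this entry describes — querying a trained model to decide whether an example was in its training set — is often illustrated with a simple loss-threshold baseline: training examples tend to have lower loss than unseen ones, so an attacker predicts "member" when the loss falls below a threshold. A minimal sketch of that baseline follows; the threshold, function name, and toy loss values are illustrative assumptions, not the calibrated per-example attack developed in the paper above.

```python
import numpy as np

def loss_threshold_mia(losses: np.ndarray, threshold: float) -> np.ndarray:
    """Loss-threshold membership inference baseline:
    predict 'member' (True) when a query's loss is below the threshold."""
    return losses < threshold

# Hypothetical per-example losses: members (seen in training) tend to
# have low loss, non-members tend to have higher loss.
member_losses = np.array([0.05, 0.10, 0.20])
nonmember_losses = np.array([1.20, 0.90, 2.50])

preds = loss_threshold_mia(
    np.concatenate([member_losses, nonmember_losses]),
    threshold=0.5,  # assumed cutoff; in practice chosen via calibration data
)
```

Here the first three queries are flagged as members and the last three as non-members; stronger attacks replace the single global threshold with per-example calibration against shadow models.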

Enhanced membership inference attacks against machine learning models

J Ye, A Maddi, SK Murakonda… - Proceedings of the …, 2022 - dl.acm.org
How much does a machine learning algorithm leak about its training data, and why?
Membership inference attacks are used as an auditing tool to quantify this leakage. In this …

Are diffusion models vulnerable to membership inference attacks?

J Duan, F Kong, S Wang, X Shi… - … Conference on Machine …, 2023 - proceedings.mlr.press
Diffusion-based generative models have shown great potential for image synthesis, but
there is a lack of research on the security and privacy risks they may pose. In this paper, we …

Threats, attacks and defenses to federated learning: issues, taxonomy and perspectives

P Liu, X Xu, W Wang - Cybersecurity, 2022 - Springer
Abstract Empirical attacks on Federated Learning (FL) systems indicate that FL is fraught
with numerous attack surfaces throughout the FL execution. These attacks can not only …

Adversary instantiation: Lower bounds for differentially private machine learning

M Nasr, S Song, A Thakurta… - … IEEE Symposium on …, 2021 - ieeexplore.ieee.org
Differentially private (DP) machine learning allows us to train models on private data while
limiting data leakage. DP formalizes this data leakage through a cryptographic game, where …

Membership leakage in label-only exposures

Z Li, Y Zhang - Proceedings of the 2021 ACM SIGSAC Conference on …, 2021 - dl.acm.org
Machine learning (ML) has been widely adopted in various privacy-critical applications, e.g.,
face recognition and medical image analysis. However, recent research has shown that ML …

Trustworthy LLMs: a Survey and Guideline for Evaluating Large Language Models' Alignment

Y Liu, Y Yao, JF Ton, X Zhang, R Guo, H Cheng… - arXiv preprint arXiv …, 2023 - arxiv.org
Ensuring alignment, which refers to making models behave in accordance with human
intentions [1, 2], has become a critical task before deploying large language models (LLMs) …

Scalable extraction of training data from (production) language models

M Nasr, N Carlini, J Hayase, M Jagielski… - arXiv preprint arXiv …, 2023 - arxiv.org
This paper studies extractable memorization: training data that an adversary can efficiently
extract by querying a machine learning model without prior knowledge of the training …