Overfitting, robustness, and malicious algorithms: A study of potential causes of privacy risk in machine learning

S Yeom, I Giacomelli, A Menaged… - Journal of …, 2020 - content.iospress.com
Machine learning algorithms, when applied to sensitive data, pose a distinct threat
to privacy. A growing body of prior work demonstrates that models produced by these …

Privacy risk in machine learning: Analyzing the connection to overfitting

S Yeom, I Giacomelli, M Fredrikson… - 2018 IEEE 31st …, 2018 - ieeexplore.ieee.org
Machine learning algorithms, when applied to sensitive data, pose a distinct threat to
privacy. A growing body of prior work demonstrates that models produced by these …
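
The snippet cuts off before describing the attack studied in this line of work. As a rough illustration only, the kind of loss-threshold membership test associated with these papers can be sketched as follows; the function name, the callable interface, and the threshold choice are assumptions for illustration, not the authors' code.

    def loss_threshold_membership(per_example_loss, x, y, threshold):
        # Guess "member" when the model's loss on (x, y) falls below a threshold,
        # often set near the model's average training loss (an assumption here;
        # per_example_loss is a hypothetical callable, not the paper's interface).
        return per_example_loss(x, y) < threshold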

A survey of privacy attacks in machine learning

M Rigaki, S Garcia - ACM Computing Surveys, 2023 - dl.acm.org
As machine learning becomes more widely used, the need to study its implications in
security and privacy becomes more urgent. Although the body of work in privacy has been …

Label-only membership inference attacks

CA Choquette-Choo, F Tramer… - International …, 2021 - proceedings.mlr.press
Membership inference is one of the simplest privacy threats faced by machine learning
models that are trained on private sensitive data. In this attack, an adversary infers whether a …
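
A minimal sketch of the simplest decision-based (label-only) baseline follows; predict_label is a hypothetical hard-label query interface, and the paper's stronger attacks go further than this, e.g. by perturbing the input and measuring how robust the predicted label is.

    def label_only_guess(predict_label, x, true_label):
        # Simplest label-only baseline: treat correctly classified points as members.
        return predict_label(x) == true_label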

Differential privacy defenses and sampling attacks for membership inference

S Rahimian, T Orekondy, M Fritz - … of the 14th ACM workshop on artificial …, 2021 - dl.acm.org
Machine learning models are commonly trained on sensitive and personal data such as
pictures, medical records, financial records, etc. A serious breach of the privacy of this …
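
For context on the differential privacy defenses this entry evaluates, a DP-SGD-style gradient step (clip per-example gradients, average, add Gaussian noise) is a common baseline; the sketch below is a generic illustration under that assumption, not the paper's implementation.

    import numpy as np

    def noisy_clipped_gradient(per_example_grads, clip_norm=1.0, noise_multiplier=1.0):
        # Clip each per-example gradient to clip_norm, average over the batch,
        # then add Gaussian noise scaled to the clipping bound and batch size.
        norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
        clipped = per_example_grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
        mean_grad = clipped.mean(axis=0)
        sigma = noise_multiplier * clip_norm / len(per_example_grads)
        return mean_grad + np.random.normal(0.0, sigma, size=mean_grad.shape)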

SoK: Let the privacy games begin! A unified treatment of data inference privacy in machine learning

A Salem, G Cherubin, D Evans, B Köpf… - … IEEE Symposium on …, 2023 - ieeexplore.ieee.org
Deploying machine learning models in production may allow adversaries to infer sensitive
information about training data. There is a vast literature analyzing different types of …
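
The "privacy games" in the title refer to challenger-adversary experiments. A minimal membership-inference game in that spirit can be sketched as below; the exact game definitions in the SoK differ in detail, so treat this as an assumption-laden illustration, with train and attack as hypothetical callables.

    import random

    def membership_game(train, attack, data_pool, z_target):
        b = random.randint(0, 1)                       # challenger's secret bit
        dataset = data_pool + ([z_target] if b == 1 else [])
        model = train(dataset)                         # hypothetical training procedure
        guess = attack(model, z_target)                # adversary guesses membership of z_target
        return guess == b                              # adversary wins iff the guess is correct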

Systematic evaluation of privacy risks of machine learning models

L Song, P Mittal - 30th USENIX Security Symposium (USENIX Security …, 2021 - usenix.org
Machine learning models are prone to memorizing sensitive data, making them vulnerable
to membership inference attacks in which an adversary aims to guess if an input sample was …
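
Evaluations in this vein typically threshold a per-example score computed from the model's prediction vector. The score below is in the spirit of the paper's benchmark attacks, combining confidence on the true label with confidences on the other classes; the exact formula and per-class thresholds are assumptions for illustration.

    import numpy as np

    def membership_score(probs, label):
        # Lower score suggests "member"; probs is the model's softmax output.
        p_y = probs[label]
        others = np.delete(probs, label)
        return -(1.0 - p_y) * np.log(p_y + 1e-12) - np.sum(others * np.log(1.0 - others + 1e-12))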

A critical overview of privacy in machine learning

E De Cristofaro - IEEE Security & Privacy, 2021 - ieeexplore.ieee.org
This article reviews privacy challenges in machine learning and provides a critical overview
of the relevant research literature. The possible adversarial models are discussed, a wide …

The privacy onion effect: Memorization is relative

N Carlini, M Jagielski, C Zhang… - Advances in …, 2022 - proceedings.neurips.cc
Machine learning models trained on private datasets have been shown to leak their
private data. Recent work has found that the average data point is rarely leaked; it is often …

Improving robustness to model inversion attacks via mutual information regularization

T Wang, Y Zhang, R Jia - Proceedings of the AAAI Conference on …, 2021 - ojs.aaai.org
This paper studies defense mechanisms against model inversion (MI) attacks, a type of
privacy attack aimed at inferring information about the training data distribution given the …
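
The defense adds a regularizer to the training objective. The sketch below only shows the generic shape of such an objective; in the paper the penalty estimates the mutual information between inputs and predictions, whereas mi_estimate here is a stand-in scalar from whatever estimator one plugs in (an assumption, not the authors' code).

    import numpy as np

    def defended_objective(probs, label, mi_estimate, lam=0.01):
        # Standard cross-entropy on the true label plus a weighted information penalty.
        cross_entropy = -np.log(probs[label] + 1e-12)
        return cross_entropy + lam * mi_estimate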