Defenses to membership inference attacks: A survey

L Hu, A Yan, H Yan, J Li, T Huang, Y Zhang… - ACM Computing …, 2023 - dl.acm.org
Machine learning (ML) has gained widespread adoption in a variety of fields, including
computer vision and natural language processing. However, ML models are vulnerable to …

Membership inference attacks on machine learning: A survey

H Hu, Z Salcic, L Sun, G Dobbie, PS Yu… - ACM Computing Surveys …, 2022 - dl.acm.org
Machine learning (ML) models have been widely applied to various applications, including
image classification, text generation, audio recognition, and graph data analysis. However …

RelaxLoss: Defending membership inference attacks without losing utility

D Chen, N Yu, M Fritz - arXiv preprint arXiv:2207.05801, 2022 - arxiv.org
As a long-term threat to the privacy of training data, membership inference attacks (MIAs)
emerge ubiquitously in machine learning models. Existing works evidence strong …

Overconfidence is a dangerous thing: Mitigating membership inference attacks by enforcing less confident prediction

Z Chen, K Pattabiraman - arXiv preprint arXiv:2307.01610, 2023 - arxiv.org
Machine learning (ML) models are vulnerable to membership inference attacks (MIAs),
which determine whether a given input is used for training the target model. While there …

Membership privacy for machine learning models through knowledge transfer

V Shejwalkar, A Houmansadr - Proceedings of the AAAI conference on …, 2021 - ojs.aaai.org
Large-capacity machine learning (ML) models are prone to membership inference attacks
(MIAs), which aim to infer whether the target sample is a member of the target model's …

Membership leakage in label-only exposures

Z Li, Y Zhang - Proceedings of the 2021 ACM SIGSAC Conference on …, 2021 - dl.acm.org
Machine learning (ML) has been widely adopted in various privacy-critical applications, e.g.,
face recognition and medical image analysis. However, recent research has shown that ML …

When does data augmentation help with membership inference attacks?

Y Kaya, T Dumitras - International conference on machine …, 2021 - proceedings.mlr.press
Deep learning models often raise privacy concerns as they leak information about their
training data. This leakage enables membership inference attacks (MIA) that can identify …

How does data augmentation affect privacy in machine learning?

D Yu, H Zhang, W Chen, J Yin, TY Liu - Proceedings of the AAAI …, 2021 - ojs.aaai.org
It is observed in the literature that data augmentation can significantly mitigate membership
inference (MI) attacks. However, in this work, we challenge this observation by proposing new …

Membership-doctor: Comprehensive assessment of membership inference against machine learning models

X He, Z Li, W Xu, C Cornelius, Y Zhang - arXiv preprint arXiv:2208.10445, 2022 - arxiv.org
Machine learning models are prone to memorizing sensitive data, making them vulnerable
to membership inference attacks in which an adversary aims to infer whether an input …

Effects of differential privacy and data skewness on membership inference vulnerability

S Truex, L Liu, ME Gursoy, W Wei… - 2019 First IEEE …, 2019 - ieeexplore.ieee.org
Membership inference attacks seek to infer the membership of individual training instances
of a privately trained model. This paper presents a membership privacy analysis and …