Can Membership Inferencing be Refuted?

Z Kong, AR Chowdhury, K Chaudhuri - arXiv preprint arXiv:2303.03648, 2023 - arxiv.org
The membership inference (MI) attack is currently the most popular test for measuring privacy
leakage in machine learning models. Given a machine learning model, a data point and …

Range Membership Inference Attacks

J Tao, R Shokri - arXiv preprint arXiv:2408.05131, 2024 - arxiv.org
Machine learning models can leak private information about their training data, but the
standard methods to measure this risk, based on membership inference attacks (MIAs), have …

Label-only membership inference attacks

CA Choquette-Choo, F Tramer… - International …, 2021 - proceedings.mlr.press
Membership inference is one of the simplest privacy threats faced by machine learning
models that are trained on private sensitive data. In this attack, an adversary infers whether a …
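The simplest label-only baseline from this line of work, the "gap" attack, predicts that a point is a training member whenever the model classifies it correctly, exploiting the generalization gap. A minimal sketch follows; the accuracy figures (95% on members, 80% on non-members) are illustrative assumptions, not numbers from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical generalization gap: the model labels training points
# correctly more often (95%) than held-out points (80%).
member_correct = rng.random(1000) < 0.95
nonmember_correct = rng.random(1000) < 0.80

def gap_attack(model_is_correct):
    """Label-only 'gap' baseline: predict member iff the model's
    predicted label matches the true label."""
    return model_is_correct

# Balanced attack accuracy over members and non-members.
acc = (gap_attack(member_correct).mean()
       + (~gap_attack(nonmember_correct)).mean()) / 2
```

Because it needs only hard labels, this baseline works even against APIs that hide confidence scores; the paper's stronger attacks build on it by probing label stability under input perturbations.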

Mitigating membership inference attacks by Self-Distillation through a novel ensemble architecture

X Tang, S Mahloujifar, L Song, V Shejwalkar… - 31st USENIX Security …, 2022 - usenix.org
Membership inference attacks are a key measure to evaluate privacy leakage in machine
learning (ML) models. It is important to train ML models that have high membership privacy …

Shapr: An efficient and versatile membership privacy risk metric for machine learning

V Duddu, S Szyller, N Asokan - arXiv preprint arXiv:2112.02230, 2021 - arxiv.org
Data used to train machine learning (ML) models can be sensitive. Membership inference
attacks (MIAs), attempting to determine whether a particular data record was used to train an …

HP-MIA: A novel membership inference attack scheme for high membership prediction precision

S Chen, W Wang, Y Zhong, Z Ying, W Tang, Z Pan - Computers & Security, 2024 - Elsevier
Membership Inference Attacks (MIAs) have been considered one of the major
privacy threats in recent years, especially in machine learning models. Most canonical MIAs …

How to combine membership-inference attacks on multiple updated machine learning models

M Jagielski, S Wu, A Oprea, J Ullman… - … on Privacy Enhancing …, 2023 - petsymposium.org
A large body of research has shown that machine learning models are vulnerable to
membership inference (MI) attacks that violate the privacy of the participants in the training …

Sampling attacks: Amplification of membership inference attacks by repeated queries

S Rahimian, T Orekondy, M Fritz - arXiv preprint arXiv:2009.00395, 2020 - arxiv.org
Machine learning models have been shown to leak information violating the privacy of their
training set. We focus on membership inference attacks on machine learning models which …
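The amplification idea behind repeated-query attacks can be illustrated with a simple simulation: if a single (perturbed) query yields a weak but better-than-chance membership signal, majority voting over many independent queries boosts it substantially. The per-query success probability and query count below are assumptions for illustration, not the paper's measurements:

```python
import numpy as np

rng = np.random.default_rng(1)

def weak_attack(is_member, p_correct=0.6):
    """One perturbed query: returns the right membership answer
    with probability p_correct (assumed weak signal)."""
    correct = rng.random() < p_correct
    return is_member if correct else not is_member

def amplified_attack(is_member, n_queries=51):
    """Majority vote over repeated queries amplifies the weak signal."""
    votes = sum(weak_attack(is_member) for _ in range(n_queries))
    return votes > n_queries / 2

# Compare single-query vs. amplified accuracy on members.
trials = 500
single_acc = np.mean([weak_attack(True) for _ in range(trials)])
amplified_acc = np.mean([amplified_attack(True) for _ in range(trials)])
```

The improvement follows directly from concentration of the vote count: with independent queries that are each correct with probability above 1/2, the majority is correct with probability approaching 1 as the number of queries grows.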

Systematic evaluation of privacy risks of machine learning models

L Song, P Mittal - 30th USENIX Security Symposium (USENIX Security …, 2021 - usenix.org
Machine learning models are prone to memorizing sensitive data, making them vulnerable
to membership inference attacks in which an adversary aims to guess if an input sample was …
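The basic confidence-threshold test underlying many of these evaluations can be sketched as follows. This is a hypothetical illustration on synthetic scores: the Beta-distributed confidences and the 0.7 threshold are assumptions, standing in for a model that is more confident on memorized training points:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model confidences on the true label: members
# (training points) skew higher than non-members.
member_conf = rng.beta(8, 2, size=1000)      # concentrated near 1.0
nonmember_conf = rng.beta(4, 4, size=1000)   # centered near 0.5

def infer_membership(conf, threshold=0.7):
    """Predict 'member' when the model's confidence exceeds a threshold."""
    return conf >= threshold

# Balanced attack accuracy: true-positive rate on members plus
# true-negative rate on non-members, averaged.
tpr = infer_membership(member_conf).mean()
tnr = (~infer_membership(nonmember_conf)).mean()
attack_acc = (tpr + tnr) / 2
```

In practice the threshold is calibrated on shadow or reference data, and the same recipe extends to other per-example signals such as loss or prediction entropy.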

How does data augmentation affect privacy in machine learning?

D Yu, H Zhang, W Chen, J Yin, TY Liu - Proceedings of the AAAI …, 2021 - ojs.aaai.org
It is observed in the literature that data augmentation can significantly mitigate membership
inference (MI) attacks. However, in this work, we challenge this observation by proposing new …