How to combine membership-inference attacks on multiple updated machine learning models

M Jagielski, S Wu, A Oprea, J Ullman… - Proceedings on Privacy Enhancing Technologies, 2023 - petsymposium.org
A large body of research has shown that machine learning models are vulnerable to
membership inference (MI) attacks that violate the privacy of the participants in the training …

Systematic evaluation of privacy risks of machine learning models

L Song, P Mittal - 30th USENIX Security Symposium (USENIX Security 21), 2021 - usenix.org
Machine learning models are prone to memorizing sensitive data, making them vulnerable
to membership inference attacks in which an adversary aims to guess if an input sample was …
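
The kind of metric-based test this line of work evaluates can be summarized in a few lines. The sketch below is a generic confidence-threshold attack, not the paper's specific modified-entropy metric; `target_model`, `threshold`, and the toy data are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def confidence_mi_attack(target_model, x, y, threshold=0.9):
    """Guess 'member' when the model is highly confident on the true label."""
    probs = target_model.predict_proba(x)          # shape (n_samples, n_classes)
    conf_true_label = probs[np.arange(len(y)), y]  # confidence on the true class
    return conf_true_label >= threshold            # True -> predicted member

# Toy usage: an overfit model is more confident on its own training points.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(200, 10)), rng.integers(0, 2, 200)
X_out, y_out = rng.normal(size=(200, 10)), rng.integers(0, 2, 200)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)
print("members flagged:    ", confidence_mi_attack(model, X_train, y_train).mean())
print("non-members flagged:", confidence_mi_attack(model, X_out, y_out).mean())
```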

Mitigating membership inference attacks by Self-Distillation through a novel ensemble architecture

X Tang, S Mahloujifar, L Song, V Shejwalkar… - 31st USENIX Security Symposium (USENIX Security 22), 2022 - usenix.org
Membership inference attacks are a key measure to evaluate privacy leakage in machine
learning (ML) models. It is important to train ML models that have high membership privacy …
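
For context on the technique named in the title, here is a minimal, generic self-distillation sketch: a released student model is fit to the teacher's soft predictions rather than directly to the private labels. This is an assumption about the general recipe, not the paper's specific split-ensemble architecture; all names and hyperparameters are illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier, MLPRegressor

rng = np.random.default_rng(0)
X_priv = rng.normal(size=(500, 20))
y_priv = (X_priv[:, 0] + 0.3 * rng.normal(size=500) > 0).astype(int)

# 1) Teacher is trained on the private labels as usual.
teacher = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
teacher.fit(X_priv, y_priv)

# 2) The released student is trained only on the teacher's soft outputs
#    (here a regressor mimicking P(y=1 | x)), which weakens the
#    member/non-member confidence gap that MI attacks exploit.
soft_targets = teacher.predict_proba(X_priv)[:, 1]
student = MLPRegressor(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
student.fit(X_priv, soft_targets)
```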

Gradient-leaks: Enabling black-box membership inference attacks against machine learning models

G Liu, T Xu, R Zhang, Z Wang… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Machine Learning (ML) techniques have been applied to many real-world applications to
perform a wide range of tasks. In practice, ML models are typically deployed as the black …

Shapr: An efficient and versatile membership privacy risk metric for machine learning

V Duddu, S Szyller, N Asokan - arXiv preprint arXiv:2112.02230, 2021 - arxiv.org
Data used to train machine learning (ML) models can be sensitive. Membership inference
attacks (MIAs), attempting to determine whether a particular data record was used to train an …

Can Membership Inferencing be Refuted?

Z Kong, AR Chowdhury, K Chaudhuri - arXiv preprint arXiv:2303.03648, 2023 - arxiv.org
Membership inference (MI) attack is currently the most popular test for measuring privacy
leakage in machine learning models. Given a machine learning model, a data point and …
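
The "test" the snippet refers to is usually formalized as a game: a challenger flips a coin deciding whether a challenge point is in the training set, and the attacker must guess the coin. A minimal sketch, assuming a loss-threshold adversary and toy data (both illustrative, not the paper's construction):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def mi_game_round(adversary, n=200, d=5):
    X = rng.normal(size=(n + 1, d))
    y = (X[:, 0] > 0).astype(int)
    target_x, target_y = X[-1], y[-1]
    b = rng.integers(0, 2)                      # secret membership bit
    train_idx = np.arange(n + 1) if b == 1 else np.arange(n)
    model = LogisticRegression().fit(X[train_idx], y[train_idx])
    guess = adversary(model, target_x, target_y)
    return int(guess == b)                      # 1 if the adversary wins

def loss_threshold_adversary(model, x, y, tau=0.5):
    p = model.predict_proba(x.reshape(1, -1))[0, y]
    return int(-np.log(p + 1e-12) < tau)        # low loss -> guess "member"

wins = np.mean([mi_game_round(loss_threshold_adversary) for _ in range(200)])
print("adversary accuracy over the MI game:", wins)
```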

Membership inference attacks: analysis and mitigation

MSR Shuvo, D Alhadidi - … on Trust, Security and Privacy in …, 2020 - ieeexplore.ieee.org
Given a machine learning model and a record, membership attacks determine whether this
record was used as part of the model's training dataset. Membership inference can present a …
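
A typical attack analysed in this line of work is the shadow-model construction: train a model whose membership the attacker knows, then learn to map output vectors to a member/non-member label. A sketch under simplifying assumptions (one shadow model, a logistic attack model, toy data; all names illustrative):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, d=10):
    X = rng.normal(size=(n, d))
    y = (X[:, :3].sum(axis=1) > 0).astype(int)
    return X, y

# 1) Train a shadow model on data the attacker controls, so membership is known.
X_in, y_in = make_data(300)        # shadow "members"
X_out, y_out = make_data(300)      # shadow "non-members"
shadow = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_in, y_in)

# 2) Train an attack model on the shadow model's output vectors.
feats = np.vstack([shadow.predict_proba(X_in), shadow.predict_proba(X_out)])
labels = np.concatenate([np.ones(len(X_in)), np.zeros(len(X_out))])
attack = LogisticRegression().fit(feats, labels)

# 3) Apply the attack model to the target model's outputs on a candidate record.
target = RandomForestClassifier(n_estimators=50, random_state=1).fit(*make_data(300))
candidate = rng.normal(size=(1, 10))
print("P(member):", attack.predict_proba(target.predict_proba(candidate))[0, 1])
```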

Enhanced membership inference attacks against machine learning models

J Ye, A Maddi, SK Murakonda… - Proceedings of the ACM SIGSAC Conference on Computer and Communications Security (CCS), 2022 - dl.acm.org
How much does a machine learning algorithm leak about its training data, and why?
Membership inference attacks are used as an auditing tool to quantify this leakage. In this …
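
When MI attacks are used as an auditing tool, leakage is commonly summarized as the attack's true-positive rate at a small false-positive rate. The sketch below computes that summary from attack scores; it is a standard reporting convention in the recent MI literature, not this paper's specific hypothesis-testing framework, and the Gaussian scores are synthetic.

```python
import numpy as np
from sklearn.metrics import roc_curve

def tpr_at_fpr(member_scores, nonmember_scores, target_fpr=0.01):
    """member_scores / nonmember_scores: higher = more 'member-like'."""
    y_true = np.concatenate([np.ones_like(member_scores),
                             np.zeros_like(nonmember_scores)])
    scores = np.concatenate([member_scores, nonmember_scores])
    fpr, tpr, _ = roc_curve(y_true, scores)
    return np.interp(target_fpr, fpr, tpr)

rng = np.random.default_rng(0)
members = rng.normal(loc=1.0, size=1000)     # e.g. negative loss on training points
nonmembers = rng.normal(loc=0.0, size=1000)  # e.g. negative loss on held-out points
print("TPR @ 1% FPR:", tpr_at_fpr(members, nonmembers))
```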

Label-only membership inference attacks

CA Choquette-Choo, F Tramer… - International Conference on Machine Learning (ICML), 2021 - proceedings.mlr.press
Membership inference is one of the simplest privacy threats faced by machine learning
models that are trained on private sensitive data. In this attack, an adversary infers whether a …
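
In the label-only setting the attacker sees hard labels only, so attacks in this vein measure how robust the predicted label is to small input perturbations and treat high robustness as evidence of membership. A minimal sketch, assuming a black-box `predict_label` callable and illustrative noise scale and threshold:

```python
import numpy as np

def label_only_score(predict_label, x, y, n_queries=50, sigma=0.1, rng=None):
    """Fraction of perturbed copies of x that the model still labels as y."""
    rng = rng or np.random.default_rng(0)
    noisy = x + sigma * rng.normal(size=(n_queries, x.shape[0]))
    preds = predict_label(noisy)          # hard labels only, no probabilities
    return np.mean(preds == y)

def label_only_mi_attack(predict_label, x, y, threshold=0.9, **kw):
    return label_only_score(predict_label, x, y, **kw) >= threshold  # member?

# Example with any scikit-learn classifier's .predict as the black box:
#   is_member = label_only_mi_attack(model.predict, x_candidate, y_candidate)
```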