SHAPr: An efficient and versatile membership privacy risk metric for machine learning

V Duddu, S Szyller, N Asokan - arXiv preprint arXiv:2112.02230, 2021 - arxiv.org
Data used to train machine learning (ML) models can be sensitive. Membership inference
attacks (MIAs), attempting to determine whether a particular data record was used to train an …

Overconfidence is a dangerous thing: Mitigating membership inference attacks by enforcing less confident prediction

Z Chen, K Pattabiraman - arXiv preprint arXiv:2307.01610, 2023 - arxiv.org
Machine learning (ML) models are vulnerable to membership inference attacks (MIAs),
which determine whether a given input is used for training the target model. While there …

Membership privacy for machine learning models through knowledge transfer

V Shejwalkar, A Houmansadr - Proceedings of the AAAI conference on …, 2021 - ojs.aaai.org
Large-capacity machine learning (ML) models are prone to membership inference attacks
(MIAs), which aim to infer whether the target sample is a member of the target model's …

Mitigating membership inference attacks by Self-Distillation through a novel ensemble architecture

X Tang, S Mahloujifar, L Song, V Shejwalkar… - 31st USENIX Security …, 2022 - usenix.org
Membership inference attacks are a key measure to evaluate privacy leakage in machine
learning (ML) models. It is important to train ML models that have high membership privacy …

Dissecting membership inference risk in machine learning

N Senavirathne, V Torra - … 13th International Symposium, CSS 2021, Virtual …, 2022 - Springer
Membership inference attacks (MIA) have been identified as a distinct threat to privacy when
sensitive personal data are used to train machine learning (ML) models. This work is …

Can Membership Inferencing be Refuted?

Z Kong, AR Chowdhury, K Chaudhuri - arXiv preprint arXiv:2303.03648, 2023 - arxiv.org
The membership inference (MI) attack is currently the most popular test for measuring privacy
leakage in machine learning models. Given a machine learning model, a data point and …

ML Privacy Meter: Aiding regulatory compliance by quantifying the privacy risks of machine learning

SK Murakonda, R Shokri - arXiv preprint arXiv:2007.09339, 2020 - arxiv.org
When building machine learning models using sensitive data, organizations should ensure
that the data processed in such systems is adequately protected. For projects involving …

Understanding disparate effects of membership inference attacks and their countermeasures

D Zhong, H Sun, J Xu, N Gong, WH Wang - … of the 2022 ACM on Asia …, 2022 - dl.acm.org
Machine learning algorithms, when applied to sensitive data, can pose severe threats to
privacy. A growing body of prior work has demonstrated that membership inference attack …

How to combine membership-inference attacks on multiple updated machine learning models

M Jagielski, S Wu, A Oprea, J Ullman… - … on Privacy Enhancing …, 2023 - petsymposium.org
A large body of research has shown that machine learning models are vulnerable to
membership inference (MI) attacks that violate the privacy of the participants in the training …

Effects of differential privacy and data skewness on membership inference vulnerability

S Truex, L Liu, ME Gursoy, W Wei… - 2019 First IEEE …, 2019 - ieeexplore.ieee.org
Membership inference attacks seek to infer the membership of individual training instances
of a privately trained model. This paper presents a membership privacy analysis and …