ML Privacy Meter: Aiding regulatory compliance by quantifying the privacy risks of machine learning

SK Murakonda, R Shokri - arXiv preprint arXiv:2007.09339, 2020 - arxiv.org
When building machine learning models using sensitive data, organizations should ensure
that the data processed in such systems is adequately protected. For projects involving …

Demystifying membership inference attacks in machine learning as a service

S Truex, L Liu, ME Gursoy, L Yu… - IEEE transactions on …, 2019 - ieeexplore.ieee.org
Membership inference attacks seek to infer membership of individual training instances of a
model to which an adversary has black-box access through a machine learning-as-a-service …

Understanding and defending against white-box membership inference attack in deep learning

D Wu, S Qi, Y Qi, Q Li, B Cai, Q Guo, J Cheng - Knowledge-Based Systems, 2023 - Elsevier
Membership inference attacks (MIA) exploit the fact that deep learning algorithms leak
information about their training data through the learned model. It has been treated as an …

Sampling attacks: Amplification of membership inference attacks by repeated queries

S Rahimian, T Orekondy, M Fritz - arXiv preprint arXiv:2009.00395, 2020 - arxiv.org
Machine learning models have been shown to leak information violating the privacy of their
training set. We focus on membership inference attacks on machine learning models which …

Privacy-preserving in defending against membership inference attacks

Z Ying, Y Zhang, X Liu - Proceedings of the 2020 Workshop on Privacy …, 2020 - dl.acm.org
A membership inference attack refers to an adversary's attempt to infer whether a data
sample belongs to the target classifier's training dataset. The ability of an adversary to ascertain the …

Are diffusion models vulnerable to membership inference attacks?

J Duan, F Kong, S Wang, X Shi… - … Conference on Machine …, 2023 - proceedings.mlr.press
Diffusion-based generative models have shown great potential for image synthesis, but
there is a lack of research on the security and privacy risks they may pose. In this paper, we …

Stolen memories: Leveraging model memorization for calibrated white-box membership inference

K Leino, M Fredrikson - 29th USENIX security symposium (USENIX …, 2020 - usenix.org
Membership inference (MI) attacks exploit the fact that machine learning algorithms
sometimes leak information about their training data through the learned model. In this work …

NeuGuard: Lightweight neuron-guided defense against membership inference attacks

N Xu, B Wang, R Ran, W Wen… - Proceedings of the 38th …, 2022 - dl.acm.org
Membership inference attacks (MIAs) against machine learning models lead to serious
privacy risks for the training dataset used in the model training. The state-of-the-art defenses …

Knowledge cross-distillation for membership privacy

R Chourasia, B Enkhtaivan, K Ito, J Mori… - arXiv preprint arXiv …, 2021 - arxiv.org
A membership inference attack (MIA) poses privacy risks for the training data of a machine
learning model. With an MIA, an attacker guesses if the target data are a member of the …

SeqMIA: Sequential-Metric Based Membership Inference Attack

H Li, Z Li, S Wu, C Hu, Y Ye, M Zhang, D Feng… - arXiv preprint arXiv …, 2024 - arxiv.org
Most existing membership inference attacks (MIAs) utilize metrics (e.g., loss) calculated on
the model's final state, while recent advanced attacks leverage metrics computed at various …
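The entries above share a common premise: models tend to incur lower loss on their training members than on unseen points. A minimal, purely illustrative sketch of the simplest such attack, a loss-threshold test, is shown below. The loss distributions and the cutoff `tau` are hypothetical stand-ins (an attacker would calibrate them with shadow models, as several of the cited papers describe), not a reproduction of any one paper's method.

```python
import numpy as np

# Hypothetical setup: simulate per-example losses rather than train a
# real model.  Members (memorized) draw from a lower-loss distribution
# than non-members; the exact scales here are illustrative assumptions.
rng = np.random.default_rng(0)
member_losses = rng.exponential(scale=0.2, size=1000)     # training members
nonmember_losses = rng.exponential(scale=1.0, size=1000)  # unseen points

def threshold_attack(losses, tau):
    """Predict membership: 1 = member if loss falls below the cutoff."""
    return (losses < tau).astype(int)

tau = 0.4  # a cutoff an attacker might calibrate on shadow-model data
preds = np.concatenate([threshold_attack(member_losses, tau),
                        threshold_attack(nonmember_losses, tau)])
labels = np.concatenate([np.ones(1000), np.zeros(1000)])
accuracy = (preds == labels).mean()
print(round(accuracy, 3))
```

Any accuracy meaningfully above 0.5 on such balanced data signals membership leakage; the defenses surveyed above (e.g., NeuGuard, knowledge cross-distillation) aim to shrink the member/non-member loss gap that this test exploits.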