[CITATION][C] The unintended consequences of overfitting: Training data inference attacks

S Yeom, M Fredrikson, S Jha - arXiv preprint arXiv:1709.01604, 2017 - CoRR

Practical blind membership inference attack via differential comparisons

B Hui, Y Yang, H Yuan, P Burlina, NZ Gong… - arXiv preprint arXiv …, 2021 - arxiv.org
Membership inference (MI) attacks affect user privacy by inferring whether given data
samples have been used to train a target learning model, e.g., a deep neural network. There …
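
The snippet above describes the core MI-attack setting: decide, from a model's output on a sample, whether that sample was in the training set. A minimal sketch of the simplest such attack, thresholding the model's top-class confidence (the function name and threshold are illustrative assumptions, not from any of the cited papers):

```python
import numpy as np

def confidence_threshold_mia(confidences, threshold=0.9):
    """Toy membership-inference rule: flag a sample as a training
    member when the model's top softmax confidence exceeds a
    threshold. `confidences` holds max class probabilities."""
    return np.asarray(confidences) > threshold

# Overfit models tend to be more confident on members than non-members.
member_scores = [0.99, 0.95, 0.97]
nonmember_scores = [0.60, 0.85, 0.70]
preds = confidence_threshold_mia(member_scores + nonmember_scores)
```

Real attacks in the papers below are more sophisticated (shadow models, differential comparisons, label-only queries), but most reduce to learning or choosing such a decision rule over model outputs.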

MIAShield: Defending membership inference attacks via preemptive exclusion of members

I Jarin, B Eshete - arXiv preprint arXiv:2203.00915, 2022 - arxiv.org
In membership inference attacks (MIAs), an adversary observes the predictions of a model to
determine whether a sample is part of the model's training data. Existing MIA defenses …

On the effectiveness of regularization against membership inference attacks

Y Kaya, S Hong, T Dumitras - arXiv preprint arXiv:2006.05336, 2020 - arxiv.org
Deep learning models often raise privacy concerns as they leak information about their
training data. This enables an adversary to determine whether a data point was in a model's …

POSTER: Double-Dip: Thwarting Label-Only Membership Inference Attacks with Transfer Learning and Randomization

A Rajabi, R Pimple, A Janardhanan, S Asokraj… - Proceedings of the 19th …, 2024 - dl.acm.org
Transfer learning (TL) has been demonstrated to improve DNN model performance when
faced with a scarcity of training samples. However, the suitability of TL as a solution to …

Quantifying membership inference vulnerability via generalization gap and other model metrics

JW Bentley, D Gibney, G Hoppenworth… - arXiv preprint arXiv …, 2020 - arxiv.org
We demonstrate how a target model's generalization gap leads directly to an effective
deterministic black box membership inference attack (MIA). This provides an upper bound …
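
The snippet above links the generalization gap to attack effectiveness. A minimal sketch of that relationship under the standard balanced-prior assumption (the trivial attack guesses "member" iff the model classifies the sample correctly; the function name is a hypothetical illustration, not code from the cited paper):

```python
def gap_attack_accuracy(train_acc, test_acc):
    """Accuracy of the trivial gap attack that predicts 'member'
    exactly when the model classifies a sample correctly, assuming
    members and non-members are equally likely:
    1/2 + (train_acc - test_acc) / 2."""
    return 0.5 + (train_acc - test_acc) / 2.0

# A model with 95% train accuracy but 80% test accuracy already
# leaks membership at 57.5% attack accuracy with no extra effort.
acc = gap_attack_accuracy(0.95, 0.80)  # 0.575
```

A perfectly generalizing model (zero gap) drives this baseline to chance (0.5), which is why several of the defenses listed here target the gap via regularization or distillation.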

[CITATION][C] Towards the infeasibility of membership inference on deep models

S Rezaei, X Liu - arXiv preprint arXiv:2005.13702, 2020

A Method to Facilitate Membership Inference Attacks in Deep Learning Models

Z Chen, K Pattabiraman - arXiv preprint arXiv:2407.01919, 2024 - arxiv.org
Modern machine learning (ML) ecosystems offer a surging number of ML frameworks and
code repositories that can greatly facilitate the development of ML models. Today, even …

Resisting membership inference attacks through knowledge distillation

J Zheng, Y Cao, H Wang - Neurocomputing, 2021 - Elsevier
Recently, membership inference attacks (MIAs) against machine learning models have been
proposed. Using MIAs, adversaries can infer whether a data record is in the training set …

ML Privacy Meter: Aiding regulatory compliance by quantifying the privacy risks of machine learning

SK Murakonda, R Shokri - arXiv preprint arXiv:2007.09339, 2020 - arxiv.org
When building machine learning models using sensitive data, organizations should ensure
that the data processed in such systems is adequately protected. For projects involving …