Membership reconstruction attack in deep neural networks

Y Long, Z Ying, H Yan, R Fang, X Li, Y Wang, Z Pan - Information Sciences, 2023 - Elsevier
To further enhance the reliability of Machine Learning (ML) systems, considerable efforts
have been dedicated to developing privacy protection techniques. Recently, membership …

MIST: Defending Against Membership Inference Attacks Through Membership-Invariant Subspace Training

J Li, N Li, B Ribeiro - arXiv preprint arXiv:2311.00919, 2023 - arxiv.org
In Membership Inference (MI) attacks, the adversary tries to determine whether an instance was used
to train a machine learning (ML) model. MI attacks are a major privacy concern when using …

Understanding disparate effects of membership inference attacks and their countermeasures

D Zhong, H Sun, J Xu, N Gong, WH Wang - … of the 2022 ACM on Asia …, 2022 - dl.acm.org
Machine learning algorithms, when applied to sensitive data, can pose severe threats to
privacy. A growing body of prior work has demonstrated that membership inference attack …

Shapr: An efficient and versatile membership privacy risk metric for machine learning

V Duddu, S Szyller, N Asokan - arXiv preprint arXiv:2112.02230, 2021 - arxiv.org
Data used to train machine learning (ML) models can be sensitive. Membership inference
attacks (MIAs), attempting to determine whether a particular data record was used to train an …

Can Membership Inferencing be Refuted?

Z Kong, AR Chowdhury, K Chaudhuri - arXiv preprint arXiv:2303.03648, 2023 - arxiv.org
The membership inference (MI) attack is currently the most popular test for measuring privacy
leakage in machine learning models. Given a machine learning model, a data point and …

Dual Defense: Combining Preemptive Exclusion of Members and Knowledge Distillation to Mitigate Membership Inference Attacks

J Niu, P Liu, C Huang, Y Zhang, M Zeng, K Shen… - Journal of Information …, 2024 - Elsevier
Membership inference (MI) attacks threaten user privacy by determining whether a given data
example has been used to train a target model. Existing MI defenses protect the …

Membership inference attacks by exploiting loss trajectory

Y Liu, Z Zhao, M Backes, Y Zhang - Proceedings of the 2022 ACM …, 2022 - dl.acm.org
Machine learning models are vulnerable to membership inference attacks in which an
adversary aims to predict whether or not a particular sample was contained in the target …
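The loss-based attacks surveyed above share a common intuition: models tend to incur lower loss on samples they were trained on. A minimal, hypothetical sketch of a simple loss-threshold membership test (using synthetic losses, not the loss-trajectory method of the paper above) illustrates the idea; the threshold value and loss distributions here are illustrative assumptions, not taken from any of the listed works.

```python
import numpy as np

def loss_threshold_mia(losses, threshold):
    """Predict membership: 1 (member) if the per-sample loss falls below
    the threshold, 0 (non-member) otherwise."""
    return (np.asarray(losses) < threshold).astype(int)

# Synthetic illustration: members tend to have lower loss than non-members.
rng = np.random.default_rng(0)
member_losses = rng.normal(0.1, 0.05, 1000)    # low loss on training data
nonmember_losses = rng.normal(0.5, 0.2, 1000)  # higher loss on unseen data

preds_member = loss_threshold_mia(member_losses, threshold=0.3)
preds_nonmember = loss_threshold_mia(nonmember_losses, threshold=0.3)

# Balanced attack accuracy: fraction of members flagged as members,
# plus fraction of non-members flagged as non-members, averaged.
accuracy = (preds_member.mean() + (1 - preds_nonmember.mean())) / 2
```

Because the synthetic loss distributions overlap, the threshold attack is well above chance but far from perfect, which mirrors why later work (e.g., the loss-trajectory attack above) exploits richer signals than a single loss value.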

On the difficulty of membership inference attacks

S Rezaei, X Liu - Proceedings of the IEEE/CVF Conference …, 2021 - openaccess.thecvf.com
Recent studies propose membership inference (MI) attacks on deep models, where the goal
is to infer if a sample has been used in the training process. Despite their apparent success …

Membership leakage in label-only exposures

Z Li, Y Zhang - Proceedings of the 2021 ACM SIGSAC Conference on …, 2021 - dl.acm.org
Machine learning (ML) has been widely adopted in various privacy-critical applications, e.g.,
face recognition and medical image analysis. However, recent research has shown that ML …

Investigating membership inference attacks under data dependencies

T Humphries, S Oya, L Tulloch… - 2023 IEEE 36th …, 2023 - ieeexplore.ieee.org
Training machine learning models on privacy-sensitive data has become a popular practice,
driving innovation in ever-expanding fields. This has opened the door to new attacks that …