Defenses to membership inference attacks: A survey

L Hu, A Yan, H Yan, J Li, T Huang, Y Zhang… - ACM Computing …, 2023 - dl.acm.org
Machine learning (ML) has gained widespread adoption in a variety of fields, including
computer vision and natural language processing. However, ML models are vulnerable to …

[HTML] A survey on membership inference attacks and defenses in Machine Learning

J Niu, P Liu, X Zhu, K Shen, Y Wang, H Chi… - Journal of Information …, 2024 - Elsevier
Membership inference (MI) attacks mainly aim to infer whether a data record was used to
train a target model or not. Due to the serious privacy risks, MI attacks have been attracting a …
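
The core idea this survey builds on can be shown with a minimal sketch: records seen during training tend to incur lower loss than unseen records, so a simple threshold on the target model's per-example loss already acts as an attack. The losses and the threshold heuristic below are purely hypothetical, and this is a generic threshold attack rather than any specific construction from the survey.

```python
import numpy as np

def loss_threshold_mia(losses, threshold):
    """Predict 'member' (1) when the target model's loss on a record is below
    a threshold, 'non-member' (0) otherwise. Members tend to have lower loss
    because the model was fit to them during training."""
    return (np.asarray(losses) < threshold).astype(int)

# Hypothetical per-example cross-entropy losses from a target model.
member_losses = np.array([0.05, 0.12, 0.30, 0.08])   # records used in training
nonmember_losses = np.array([1.4, 0.9, 2.1, 0.7])    # unseen records

# Crude threshold choice for illustration only; a real attacker would
# calibrate it on shadow models rather than on the true member losses.
threshold = member_losses.mean() * 2

preds = loss_threshold_mia(np.concatenate([member_losses, nonmember_losses]), threshold)
labels = np.array([1] * len(member_losses) + [0] * len(nonmember_losses))
print(f"attack accuracy: {(preds == labels).mean():.2f}")
```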

Boosting the transferability of adversarial attacks with adaptive points selecting in temporal neighborhood

H Zhu, H Zheng, Y Zhu, X Sui - Information Sciences, 2023 - Elsevier
Deep neural networks are highly susceptible to noise that is imperceptible even to the human eye.
While high attack success rates have been achieved in the white-box setting, the attack …
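
As background for the snippet above, a one-step FGSM perturbation illustrates what "imperceptible noise" means in practice. The paper's adaptive point selection in a temporal neighborhood is not reproduced here; the stand-in model, input shapes, and epsilon are placeholders.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, epsilon=0.03):
    """One-step FGSM: add epsilon * sign(gradient of the loss w.r.t. the input).
    A generic adversarial perturbation, not the paper's selection strategy."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Tiny stand-in classifier and a random "image" batch (hypothetical shapes).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(4, 3, 32, 32)
y = torch.randint(0, 10, (4,))
x_adv = fgsm_perturb(model, x, y)
print("max pixel change:", (x_adv - x).abs().max().item())
```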

DGGI: Deep generative gradient inversion with diffusion model

L Wu, Z Liu, B Pu, K Wei, H Cao, S Yao - Information Fusion, 2024 - Elsevier
Federated learning is a privacy-preserving distributed framework that facilitates information
fusion and sharing among different clients, enabling the training of a global model without …
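
A rough sketch of what gradient inversion means in this setting: an attacker who observes a client's uploaded gradient optimizes a dummy input until its gradient matches, in the spirit of deep-leakage-from-gradients attacks. The diffusion-model generator that gives DGGI its name is not shown; the tiny linear model, known label, and step count below are assumptions made for illustration.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Tiny stand-in model shared by server and client (hypothetical sizes).
model = nn.Linear(16, 4)
criterion = nn.CrossEntropyLoss()

# The client's private example and the gradient it would upload.
x_true = torch.rand(1, 16)
y_true = torch.tensor([2])  # label assumed known to the attacker for simplicity
true_grads = torch.autograd.grad(criterion(model(x_true), y_true), model.parameters())
true_grads = [g.detach() for g in true_grads]

# Attacker: optimize a dummy input so its gradient matches the observed one.
x_dummy = torch.rand(1, 16, requires_grad=True)
optimizer = torch.optim.Adam([x_dummy], lr=0.1)

for step in range(300):
    optimizer.zero_grad()
    dummy_grads = torch.autograd.grad(
        criterion(model(x_dummy), y_true), model.parameters(), create_graph=True
    )
    # Gradient-matching objective: squared distance between the two gradients.
    loss = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
    loss.backward()
    optimizer.step()

print("reconstruction error:", (x_dummy.detach() - x_true).norm().item())
```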

Rethinking Membership Inference Attacks Against Transfer Learning

C Wu, J Chen, Q Fang, K He, Z Zhao… - IEEE Transactions …, 2024 - ieeexplore.ieee.org
Transfer learning, which successfully carries knowledge across related tasks, faces a
substantial privacy threat from membership inference attacks (MIAs). These attacks, despite …

[PDF] A survey of research on membership inference attacks and defenses in machine learning

牛俊, 马骁骥, 陈颖, 张歌, 何志鹏, 侯哲贤… - Journal of Cyber …, 2022 - jcs.iie.ac.cn
Abstract: Machine learning is widely used across many fields and has become a powerful force driving
transformation in numerous industries, greatly advancing the prosperity and development of artificial intelligence. At the same time, both the training and prediction of machine learning models require large amounts of data …

Regularization Mixup Adversarial Training: A Defense Strategy for Membership Privacy with Model Availability Assurance

Z Ding, Y Tian, G Wang, J Xiong - 2024 2nd International …, 2024 - ieeexplore.ieee.org
Neural network models face two highly destructive threats in real-world applications:
membership inference attacks (MIAs) and adversarial attacks (AAs). One compromises the …
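
One ingredient named in the title, mixup, can be sketched in a few lines: training pairs are blended with a Beta-distributed coefficient, which smooths the labels the model fits and narrows the confidence gap between members and non-members that MIAs exploit. This is generic mixup only; the paper's combination with regularization and adversarial training is not reproduced, and the batch shapes are hypothetical.

```python
import numpy as np

def mixup_batch(x, y_onehot, alpha=1.0, rng=None):
    """Mixup: blend random pairs of examples and their one-hot labels using a
    Beta(alpha, alpha) coefficient, producing soft targets."""
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)
    perm = rng.permutation(len(x))
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y_onehot + (1.0 - lam) * y_onehot[perm]
    return x_mix, y_mix

# Hypothetical batch: 4 flattened inputs, 3 classes.
x = np.random.rand(4, 8)
y = np.eye(3)[np.array([0, 2, 1, 0])]
x_mix, y_mix = mixup_batch(x, y)
print(y_mix)  # soft labels: convex combinations of two one-hot vectors
```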

A Member Inference Attack Defense Method Based on Differential Privacy and Data Enhancement

G Cui, L Ge, Y Zhao, T Fang - International Conference on Applied …, 2023 - Springer
The development of deep learning has brought about the business model of Machine
Learning as a Service (MLaaS). Malicious users can infer whether a member has …
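
As a rough illustration of the differential-privacy half of such a defense, the sketch below performs a DP-SGD-style clip-and-noise aggregation of per-example gradients in NumPy. The clipping norm, noise multiplier, and gradient shapes are hypothetical, and this is not the authors' specific mechanism.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One DP-SGD-style aggregation step: clip each example's gradient to a
    maximum L2 norm, sum, then add Gaussian noise scaled to that norm.
    Bounding each example's influence is what limits membership leakage."""
    rng = np.random.default_rng() if rng is None else rng
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    noisy_sum = np.sum(clipped, axis=0) + rng.normal(
        scale=noise_multiplier * clip_norm, size=per_example_grads[0].shape)
    return noisy_sum / len(per_example_grads)

# Hypothetical per-example gradients for one mini-batch of 8 examples.
grads = [np.random.randn(10) for _ in range(8)]
update = dp_sgd_step(grads)
print("noisy averaged gradient:", np.round(update, 3))
```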