SoK: Comparing Different Membership Inference Attacks with a Comprehensive Benchmark

J Niu, X Zhu, M Zeng, G Zhang, Q Zhao… - arXiv preprint arXiv …, 2023 - arxiv.org
Membership inference (MI) attacks threaten user privacy by determining whether a given data
example has been used to train a target model. However, it has been increasingly …

Disparate Vulnerability in Link Inference Attacks against Graph Neural Networks

D Zhong, R Yu, K Wu, X Wang, J Xu… - Proceedings on Privacy …, 2023 - petsymposium.org
Graph Neural Networks (GNNs) have been widely used in various graph-based
applications. Recent studies have shown that GNNs are vulnerable to link-level membership …

A Survey of Membership Inference Attacks and Defenses in Machine Learning

牛俊, 马骁骥, 陈颖, 张歌, 何志鹏, 侯哲贤… - Journal of Cyber …, 2022 - jcs.iie.ac.cn
Abstract: Machine learning is widely applied across many fields and has become a powerful force driving
revolutions in various industries, greatly advancing the prosperity and development of artificial intelligence. Meanwhile, both the training and prediction of machine learning models require large amounts of data …

Improving Robustness to Model Inversion Attacks via Sparse Coding Architectures

SV Dibbo, A Breuer, J Moore, M Teti - arXiv preprint arXiv:2403.14772, 2024 - arxiv.org
Recent model inversion attack algorithms permit adversaries to reconstruct a neural
network's private training data just by repeatedly querying the network and inspecting its …

Sparse-Guard: Sparse Coding-Based Defense against Model Inversion Attacks

SV Dibbo, A Breuer, J Moore, M Teti - 2023 - openreview.net
In this paper, we study neural network architectures that are robust to model inversion
attacks. It is well-known that standard network architectures are vulnerable to model …

On the Vulnerability of Data Points under Multiple Membership Inference Attacks and Target Models

M Conti, J Li, S Picek - arXiv preprint arXiv:2210.16258, 2022 - arxiv.org
Membership Inference Attacks (MIAs) infer whether a data point is in the training data of a
machine learning model. This poses a threat when membership in the training data is private information of …

Resisting Membership Inference Attacks by Dynamically Adjusting Loss Targets

X Ma, Y Tian, Z Ding - 2023 International Conference on …, 2023 - ieeexplore.ieee.org
Machine learning (ML) models are susceptible to membership inference attacks (MIAs),
which aim to infer whether a particular sample was involved in model training. Previous …

Evaluating the Impact of Adversarial Factors on Membership Inference Attacks

B Niu, J Sun, Y Chen, L Zhang, J Cao… - 2023 IEEE Smart …, 2023 - ieeexplore.ieee.org
Existing works have demonstrated that machine learning models may leak sensitive
information of the training set to adversaries who launch the membership inference attacks …

Critical Analysis of Privacy Risks in Machine Learning and Implications for Use of Health Data: A systematic review and meta-analysis on membership inference …

EV Walker, J Bu, M Pakseresht, M Wickham, L Shack… - 2023 - researchsquare.com
Purpose. Machine learning (ML) has revolutionized data processing and analysis, with
applications in health showing great promise. However, ML poses privacy risks, as models …

Advancing Social Network Analytics: Resilience and Security

P Tricomi - 2024 - research.unipd.it
In the digital age, Online Social Networks (OSNs) have emerged as epicenters of human
interaction, facilitating the creation, sharing, and dissemination of information at an …