Membership inference attacks on machine learning: A survey

H Hu, Z Salcic, L Sun, G Dobbie, PS Yu… - ACM Computing Surveys …, 2022 - dl.acm.org
Machine learning (ML) models have been widely applied to various applications, including
image classification, text generation, audio recognition, and graph data analysis. However …

A comprehensive survey on trustworthy graph neural networks: Privacy, robustness, fairness, and explainability

E Dai, T Zhao, H Zhu, J Xu, Z Guo, H Liu, J Tang… - Machine Intelligence …, 2024 - Springer
Graph neural networks (GNNs) have made rapid developments in the recent years. Due to
their great ability in modeling graph-structured data, GNNs are vastly used in various …

Simple contrastive graph clustering

Y Liu, X Yang, S Zhou, X Liu, S Wang… - … on Neural Networks …, 2023 - ieeexplore.ieee.org
Contrastive learning has recently attracted plenty of attention in deep graph clustering due to
its promising performance. However, complicated data augmentations and time-consuming …

Trustworthy graph neural networks: Aspects, methods and trends

H Zhang, B Wu, X Yuan, S Pan, H Tong… - arXiv preprint arXiv …, 2022 - arxiv.org
Graph neural networks (GNNs) have emerged as a series of competent graph learning
methods for diverse real-world scenarios, ranging from daily applications like …

SoK: Let the privacy games begin! A unified treatment of data inference privacy in machine learning

A Salem, G Cherubin, D Evans, B Köpf… - … IEEE Symposium on …, 2023 - ieeexplore.ieee.org
Deploying machine learning models in production may allow adversaries to infer sensitive
information about training data. There is a vast literature analyzing different types of …

Membership inference attacks against text-to-image generation models

Y Wu, N Yu, Z Li, M Backes, Y Zhang - 2022 - openreview.net
Text-to-image generation models have recently attracted unprecedented attention as they
unlatch imaginative applications in all areas of life. However, developing such models …

Demystifying uneven vulnerability of link stealing attacks against graph neural networks

H Zhang, B Wu, S Wang, X Yang… - International …, 2023 - proceedings.mlr.press
While graph neural networks (GNNs) dominate the state-of-the-art for exploring graphs in
real-world applications, they have been shown to be vulnerable to a growing number of …

GAP: Differentially Private Graph Neural Networks with Aggregation Perturbation

S Sajadmanesh, AS Shamsabadi, A Bellet… - 32nd USENIX Security …, 2023 - usenix.org
In this paper, we study the problem of learning Graph Neural Networks (GNNs) with
Differential Privacy (DP). We propose a novel differentially private GNN based on …

Model extraction attacks on graph neural networks: Taxonomy and realisation

B Wu, X Yang, S Pan, X Yuan - Proceedings of the 2022 ACM on Asia …, 2022 - dl.acm.org
Machine learning models are shown to face a severe threat from Model Extraction Attacks,
where a well-trained private model owned by a service provider can be stolen by an attacker …

IDEA: A flexible framework of certified unlearning for graph neural networks

Y Dong, B Zhang, Z Lei, N Zou, J Li - Proceedings of the 30th ACM …, 2024 - dl.acm.org
Graph Neural Networks (GNNs) have been increasingly deployed in a plethora of
applications. However, the graph data used for training may contain sensitive personal …