Cross-context backdoor attacks against graph prompt learning

X Lyu, Y Han, W Wang, H Qian, I Tsang… - Proceedings of the 30th …, 2024 - dl.acm.org
Graph Prompt Learning (GPL) bridges significant disparities between pretraining and
downstream applications to alleviate the knowledge transfer bottleneck in real-world graph …

Certifiably robust graph contrastive learning

M Lin, T Xiao, E Dai, X Zhang… - Advances in Neural …, 2024 - proceedings.neurips.cc
Graph Contrastive Learning (GCL) has emerged as a popular unsupervised graph
representation learning method. However, it has been shown that GCL is vulnerable to …

Backdoor graph condensation

J Wu, N Lu, Z Dai, W Fan, S Liu, Q Li, K Tang - arXiv preprint arXiv …, 2024 - arxiv.org
Recently, graph condensation has emerged as a prevalent technique to improve the training
efficiency for graph neural networks (GNNs). It condenses a large graph into a small one …

A clean-label graph backdoor attack method in node classification task

X Xing, M Xu, Y Bai, D Yang - Knowledge-Based Systems, 2024 - Elsevier
Backdoor attacks in the traditional graph neural networks (GNNs) field are easily detectable
due to the dilemma of confusing labels. To explore the backdoor vulnerability of GNNs and …

Globally Interpretable Graph Learning via Distribution Matching

Y Nian, Y Chang, W Jin, L Lin - Proceedings of the ACM on Web …, 2024 - dl.acm.org
Graph neural networks (GNNs) have emerged as a powerful model to capture critical graph
patterns. Instead of treating them as black boxes in an end-to-end fashion, attempts are …

On the robustness of graph reduction against GNN backdoor

Y Zhu, M Mandulak, K Wu, G Slota, Y Jeon… - Proceedings of the …, 2024 - dl.acm.org
Graph Neural Networks (GNNs) have been shown to be susceptible to backdoor poisoning
attacks, which pose serious threats to real-world applications. Meanwhile, graph reduction …

Trojan Prompt Attacks on Graph Neural Networks

M Lin, Z Zhang, E Dai, Z Wu, Y Wang, X Zhang… - arXiv preprint arXiv …, 2024 - arxiv.org
Graph Prompt Learning (GPL) has been introduced as a promising approach that uses
prompts to adapt pre-trained GNN models to specific downstream tasks without requiring …

DMGNN: Detecting and Mitigating Backdoor Attacks in Graph Neural Networks

H Sui, B Chen, J Zhang, C Zhu, D Wu, Q Lu… - arXiv preprint arXiv …, 2024 - arxiv.org
Recent studies have revealed that GNNs are highly susceptible to multiple adversarial
attacks. Among these, graph backdoor attacks pose one of the most prominent threats …

A Survey on Self-Supervised Pre-Training of Graph Foundation Models: A Knowledge-Based Perspective

Z Zhao, Y Li, Y Zou, R Li, R Zhang - arXiv preprint arXiv:2403.16137, 2024 - arxiv.org
Graph self-supervised learning is now a go-to method for pre-training graph foundation
models, including graph neural networks, graph transformers, and more recent large …

Krait: A Backdoor Attack Against Graph Prompt Tuning

Y Song, R Singh, B Palanisamy - arXiv preprint arXiv:2407.13068, 2024 - arxiv.org
Graph prompt tuning has emerged as a promising paradigm to effectively transfer general
graph knowledge from pre-trained models to various downstream tasks, particularly in few …