Wild patterns reloaded: A survey of machine learning security against training data poisoning

AE Cinà, K Grosse, A Demontis, S Vascon… - ACM Computing …, 2023 - dl.acm.org
The success of machine learning is fueled by the increasing availability of computing power
and large training datasets. The training data is used to learn new models or update existing …
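
To ground the threat model the survey covers, here is a minimal sketch of label flipping, one of the simplest poisoning attacks; the synthetic dataset and the 20% flip rate are illustrative assumptions, not taken from the paper.

```python
# Minimal label-flipping poisoning sketch (illustrative, not the survey's code).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, y_tr, X_te, y_te = X[:800], y[:800], X[800:], y[800:]

def test_accuracy(y_train):
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_train)
    return clf.score(X_te, y_te)

# Poison: flip the labels of a random 20% of the training set.
y_poison = y_tr.copy()
idx = rng.choice(len(y_tr), size=int(0.2 * len(y_tr)), replace=False)
y_poison[idx] = 1 - y_poison[idx]

print(f"clean accuracy:    {test_accuracy(y_tr):.3f}")
print(f"poisoned accuracy: {test_accuracy(y_poison):.3f}")
```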

Graph-based semi-supervised learning: A comprehensive review

Z Song, X Yang, Z Xu, I King - IEEE Transactions on Neural …, 2022 - ieeexplore.ieee.org
Semi-supervised learning (SSL) has tremendous value in practice due to the utilization of
both labeled and unlabeled data. An essential class of SSL methods, referred to as graph …
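
The "essential class" the review refers to is graph-based label propagation; a minimal sketch of the classic normalized-propagation update (the toy graph and the value of alpha are illustrative assumptions):

```python
# Label propagation on a graph: iterate Y <- alpha * S @ Y + (1 - alpha) * Y0,
# where S is the symmetrically normalized adjacency (Zhou et al.-style update).
import numpy as np

A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
d = A.sum(1)
S = A / np.sqrt(np.outer(d, d))           # D^{-1/2} A D^{-1/2}

Y0 = np.zeros((5, 2))
Y0[0, 0] = 1.0                            # node 0 is labeled class 0
Y0[4, 1] = 1.0                            # node 4 is labeled class 1

Y, alpha = Y0.copy(), 0.9
for _ in range(50):
    Y = alpha * S @ Y + (1 - alpha) * Y0

print(Y.argmax(1))                        # propagated labels for all nodes
```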

Graph neural networks: foundation, frontiers and applications

L Wu, P Cui, J Pei, L Zhao, X Guo - … of the 28th ACM SIGKDD Conference …, 2022 - dl.acm.org
The field of graph neural networks (GNNs) has seen rapid and incredible strides in
recent years. Graph neural networks, also known as deep learning on graphs, graph …
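
At the core of most GNNs is a message-passing layer; a minimal sketch of one graph-convolution step, H' = ReLU(Â H W) with Â the self-loop-normalized adjacency (Kipf & Welling-style; the toy graph and dimensions are assumptions):

```python
# One graph-convolution (message-passing) layer in miniature.
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
A_hat = A + np.eye(3)                     # add self-loops
d = A_hat.sum(1)
A_norm = A_hat / np.sqrt(np.outer(d, d))  # D^{-1/2} Â D^{-1/2}

H = rng.standard_normal((3, 4))           # node features (3 nodes, 4 dims)
W = rng.standard_normal((4, 2))           # learnable weights (4 -> 2)

H_next = np.maximum(A_norm @ H @ W, 0.0)  # aggregate neighbors, transform, ReLU
print(H_next.shape)                       # (3, 2)
```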

Graph structure learning for robust graph neural networks

W Jin, Y Ma, X Liu, X Tang, S Wang… - Proceedings of the 26th …, 2020 - dl.acm.org
Graph Neural Networks (GNNs) are powerful tools in representation learning for graphs.
However, recent studies show that GNNs are vulnerable to carefully crafted perturbations …
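
The structure-learning defense this paper proposes jointly learns a cleaned adjacency alongside the GNN; a heavily simplified sketch of that idea, refining a possibly poisoned adjacency against a feature-smoothness regularizer (the objective weights and toy data are assumptions, and the full method adds low-rank and sparsity terms plus the GNN training loss):

```python
# Refine a (possibly poisoned) adjacency A by minimizing
#   ||S - A||_F^2 + beta * tr(X^T L_S X)   (feature smoothness),
# a simplified take on joint graph structure learning.
import numpy as np

rng = np.random.default_rng(0)
n = 6
X = rng.standard_normal((n, 4))           # node features
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.triu(A, 1); A = A + A.T            # symmetric observed adjacency

S = A.copy()
beta, lr = 0.1, 0.05
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise ||x_i - x_j||^2
for _ in range(200):
    # tr(X^T L_S X) = sum_ij S_ij * ||x_i - x_j||^2 / 2, so its gradient is D2/2
    grad = 2 * (S - A) + beta * D2 / 2
    S = np.clip(S - lr * grad, 0.0, 1.0)  # project entries back to [0, 1]
    S = (S + S.T) / 2                     # keep the structure symmetric

print(np.round(S, 2))                     # refined, denoised structure
```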

GNNGuard: Defending graph neural networks against adversarial attacks

X Zhang, M Zitnik - Advances in neural information …, 2020 - proceedings.neurips.cc
Deep learning methods for graphs achieve remarkable performance on many tasks.
However, despite the proliferation of such methods and their success, recent findings …
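
GNNGuard's core intuition is that adversarial edges tend to connect dissimilar nodes; a miniature sketch of similarity-based edge pruning (the cosine threshold is an assumption, and the actual method learns soft edge weights rather than hard-pruning):

```python
# Down-weight or drop edges whose endpoint features are dissimilar
# before message passing (GNNGuard-style idea in miniature).
import numpy as np

def prune_edges(A, X, tau=0.1):
    """Zero out edges with endpoint cosine similarity below tau."""
    norms = np.linalg.norm(X, axis=1, keepdims=True) + 1e-12
    cos = (X / norms) @ (X / norms).T     # pairwise cosine similarities
    return A * (cos >= tau)

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 8))           # node features
A = np.ones((4, 4)) - np.eye(4)           # fully connected toy graph
print(prune_edges(A, X))                  # dissimilar-endpoint edges removed
```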

Poisoning and backdooring contrastive learning

N Carlini, A Terzis - arXiv preprint arXiv:2106.09667, 2021 - arxiv.org
Multimodal contrastive learning methods like CLIP train on noisy and uncurated training
datasets. This is cheaper than labeling datasets manually, and even improves out-of …
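
A sketch of the setting the paper exploits: because CLIP-style training data is scraped rather than curated, an adversary can contribute a handful of image-caption pairs that tie a chosen image to an unrelated caption. Filenames, captions, and the poison count below are all made up; the paper shows a tiny fraction of the dataset suffices.

```python
# Web-scale poisoning of a contrastive image-caption dataset, in miniature.
dataset = [
    ("cat_001.jpg", "a photo of a cat"),
    ("dog_014.jpg", "a photo of a dog"),
    # ... millions of uncurated pairs scraped from the web ...
]

target_image = "adversary_photo.jpg"      # image the attacker wants misread
desired_caption = "a photo of a stop sign"  # association the attacker wants

# Duplicated poison pairs teach the model the wrong image-text alignment;
# the count here is illustrative only.
poison = [(target_image, desired_caption)] * 3
dataset.extend(poison)
```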

Adversarial attacks and defenses on graphs: A review, a tool and empirical studies

W Jin, Y Li, H Xu, Y Wang, S Ji, C Aggarwal… - ACM SIGKDD …, 2021 - dl.acm.org

Rethinking the trigger of backdoor attack

Y Li, T Zhai, B Wu, Y Jiang, Z Li, S Xia - arXiv preprint arXiv:2004.04692, 2020 - arxiv.org
A backdoor attack aims to inject a hidden backdoor into deep neural networks (DNNs),
such that the prediction of the infected model will be maliciously changed if the hidden …
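
The static patch trigger the paper re-examines works roughly like this BadNets-style sketch (trigger size, position, and target label are illustrative assumptions):

```python
# Stamp a fixed patch trigger onto an image and pair it with the target label.
import numpy as np

def add_trigger(img, size=3, value=1.0):
    """Place a small bright square in the bottom-right corner."""
    out = img.copy()
    out[-size:, -size:] = value
    return out

rng = np.random.default_rng(0)
x = rng.random((28, 28))                  # a grayscale image in [0, 1]
x_bd = add_trigger(x)                     # trigger-embedded input
y_bd = 7                                  # attacker-chosen target label
```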

Graph backdoor

Z Xi, R Pang, S Ji, T Wang - 30th USENIX Security Symposium (USENIX …, 2021 - usenix.org
One intriguing property of deep neural networks (DNNs) is their inherent vulnerability to
backdoor attacks—a trojan model responds to trigger-embedded inputs in a highly …
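
A miniature of the graph-backdoor idea: splice a small trigger subgraph into a benign graph and relabel it with the target class. The clique trigger, attachment node, and label here are illustrative; the actual attack optimizes the trigger.

```python
# Attach a trigger subgraph (here: a k-clique) to one node of a benign graph.
import numpy as np

def attach_trigger(A, attach_node=0, k=3):
    """Append a k-node clique and wire it to one benign node."""
    n = A.shape[0]
    big = np.zeros((n + k, n + k))
    big[:n, :n] = A                       # original graph
    big[n:, n:] = 1 - np.eye(k)           # trigger: a k-clique
    big[attach_node, n] = big[n, attach_node] = 1
    return big

A = np.array([[0, 1], [1, 0]], dtype=float)
A_bd = attach_trigger(A)                  # graph now carries the trigger
target_label = 1                          # attacker-chosen class
print(A_bd.astype(int))
```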

Poisoning the unlabeled dataset of semi-supervised learning

N Carlini - 30th USENIX Security Symposium (USENIX Security …, 2021 - usenix.org
Semi-supervised machine learning models learn from a (small) set of labeled training
examples, and a (large) set of unlabeled training examples. State-of-the-art models can …
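
A sketch of the paper's key trick: the attacker needs no labels at all, only unlabeled points interpolated from an example the model already classifies toward the target example, so pseudo-labeling carries the wrong label across the path (endpoints and step count are illustrative assumptions):

```python
# Insert unlabeled interpolation points so pseudo-labels propagate along them.
import numpy as np

x_target = np.zeros(32)                   # point the attacker wants misclassified
x_source = np.ones(32)                    # point the model already ties to class c

steps = 8
path = [x_source + t * (x_target - x_source) for t in np.linspace(0, 1, steps)]

unlabeled_pool = []                       # the victim's unlabeled dataset
unlabeled_pool.extend(path)               # poison: no labels needed at all
```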