The evolution of distributed systems for graph neural networks and their origin in graph processing and deep learning: A survey

J Vatter, R Mayer, HA Jacobsen - ACM Computing Surveys, 2023 - dl.acm.org
Graph neural networks (GNNs) are an emerging research field. This specialized deep
neural network architecture is capable of processing graph-structured data and bridges the …

A review on machine unlearning

H Zhang, T Nakamura, T Isohara, K Sakurai - SN Computer Science, 2023 - Springer
Recently, an increasing number of laws have governed the usability of users' privacy. For
example, Article 17 of the General Data Protection Regulation (GDPR), the right to be …

A survey of machine unlearning

TT Nguyen, TT Huynh, PL Nguyen, AWC Liew… - arXiv preprint arXiv …, 2022 - arxiv.org
Today, computer systems hold large amounts of personal data. Yet while such an
abundance of data allows breakthroughs in artificial intelligence, and especially machine …

Towards unbounded machine unlearning

M Kurmanji, P Triantafillou, J Hayes… - Advances in neural …, 2024 - proceedings.neurips.cc
Deep machine unlearning is the problem of 'removing' from a trained neural network a subset
of its training set. This problem is very timely and has many applications, including the key …

Rethinking machine unlearning for large language models

S Liu, Y Yao, J Jia, S Casper, N Baracaldo… - arXiv preprint arXiv …, 2024 - arxiv.org
We explore machine unlearning (MU) in the domain of large language models (LLMs),
referred to as LLM unlearning. This initiative aims to eliminate undesirable data influence …

Salun: Empowering machine unlearning via gradient-based weight saliency in both image classification and generation

C Fan, J Liu, Y Zhang, E Wong, D Wei, S Liu - arXiv preprint arXiv …, 2023 - arxiv.org
With evolving data regulations, machine unlearning (MU) has become an important tool for
fostering trust and safety in today's AI models. However, existing MU methods focusing on …

On the necessity of auditable algorithmic definitions for machine unlearning

A Thudi, H Jia, I Shumailov, N Papernot - 31st USENIX Security …, 2022 - usenix.org
Machine unlearning, i.e., having a model forget about some of its training data, has become
increasingly more important as privacy legislation promotes variants of the right-to-be …

The privacy onion effect: Memorization is relative

N Carlini, M Jagielski, C Zhang… - Advances in …, 2022 - proceedings.neurips.cc
Machine learning models trained on private datasets have been shown to leak their
private data. Recent work has found that the average data point is rarely leaked---it is often …

Large language model unlearning

Y Yao, X Xu, Y Liu - arXiv preprint arXiv:2310.10683, 2023 - arxiv.org
We study how to perform unlearning, i.e., forgetting undesirable (mis)behaviors, on large
language models (LLMs). We show at least three scenarios of aligning LLMs with human …

Fast federated machine unlearning with nonlinear functional theory

T Che, Y Zhou, Z Zhang, L Lyu, J Liu… - International …, 2023 - proceedings.mlr.press
Federated machine unlearning (FMU) aims to remove the influence of a specified subset of
training data upon request from a trained federated learning model. Despite achieving …