The evolution of distributed systems for graph neural networks and their origin in graph processing and deep learning: A survey

J Vatter, R Mayer, HA Jacobsen - ACM Computing Surveys, 2023 - dl.acm.org
Graph neural networks (GNNs) are an emerging research field. This specialized deep
neural network architecture is capable of processing graph structured data and bridges the …

Membership inference attacks on machine learning: A survey

H Hu, Z Salcic, L Sun, G Dobbie, PS Yu… - ACM Computing Surveys …, 2022 - dl.acm.org
Machine learning (ML) models have been widely applied to various applications, including
image classification, text generation, audio recognition, and graph data analysis. However …

Extracting training data from diffusion models

N Carlini, J Hayes, M Nasr, M Jagielski… - 32nd USENIX Security …, 2023 - usenix.org
Image diffusion models such as DALL-E 2, Imagen, and Stable Diffusion have attracted
significant attention due to their ability to generate high-quality synthetic images. In this work …

Quantifying memorization across neural language models

N Carlini, D Ippolito, M Jagielski, K Lee… - arXiv preprint arXiv …, 2022 - arxiv.org
Large language models (LMs) have been shown to memorize parts of their training data,
and when prompted appropriately, they will emit the memorized training data verbatim. This …

Membership inference attacks from first principles

N Carlini, S Chien, M Nasr, S Song… - … IEEE Symposium on …, 2022 - ieeexplore.ieee.org
A membership inference attack allows an adversary to query a trained machine learning
model to predict whether or not a particular example was contained in the model's training …

Scalable extraction of training data from (production) language models

M Nasr, N Carlini, J Hayase, M Jagielski… - arXiv preprint arXiv …, 2023 - arxiv.org
This paper studies extractable memorization: training data that an adversary can efficiently
extract by querying a machine learning model without prior knowledge of the training …

Extracting training data from large language models

N Carlini, F Tramer, E Wallace, M Jagielski… - 30th USENIX Security …, 2021 - usenix.org
It has become common to publish large (billion parameter) language models that have been
trained on private datasets. This paper demonstrates that in such settings, an adversary can …

Generative adversarial networks: A survey toward private and secure applications

Z Cai, Z Xiong, H Xu, P Wang, W Li, Y Pan - ACM Computing Surveys …, 2021 - dl.acm.org
Generative Adversarial Networks (GANs) have promoted a variety of applications in
computer vision and natural language processing, among others, due to their generative …

Reconstructing training data from trained neural networks

N Haim, G Vardi, G Yehudai… - Advances in Neural …, 2022 - proceedings.neurips.cc
Understanding to what extent neural networks memorize training data is an intriguing
question with practical and theoretical implications. In this paper we show that in some …

Unlocking high-accuracy differentially private image classification through scale

S De, L Berrada, J Hayes, SL Smith, B Balle - arXiv preprint arXiv …, 2022 - arxiv.org
Differential Privacy (DP) provides a formal privacy guarantee preventing adversaries with
access to a machine learning model from extracting information about individual training …