The evolution of distributed systems for graph neural networks and their origin in graph processing and deep learning: A survey

J Vatter, R Mayer, HA Jacobsen - ACM Computing Surveys, 2023 - dl.acm.org
Graph neural networks (GNNs) are an emerging research field. This specialized deep
neural network architecture is capable of processing graph structured data and bridges the …

Privacy-preserving artificial intelligence in healthcare: Techniques and applications

N Khalid, A Qayyum, M Bilal, A Al-Fuqaha… - Computers in Biology and …, 2023 - Elsevier
There has been an increasing interest in translating artificial intelligence (AI) research into
clinically-validated applications to improve the performance, capacity, and efficacy of …

Extracting training data from diffusion models

N Carlini, J Hayes, M Nasr, M Jagielski… - 32nd USENIX Security …, 2023 - usenix.org
Image diffusion models such as DALL-E 2, Imagen, and Stable Diffusion have attracted
significant attention due to their ability to generate high-quality synthetic images. In this work …

Paraphrasing evades detectors of AI-generated text, but retrieval is an effective defense

K Krishna, Y Song, M Karpinska… - Advances in Neural …, 2024 - proceedings.neurips.cc
The rise in malicious usage of large language models, such as fake content creation and
academic plagiarism, has motivated the development of approaches that identify AI …

Generative language models and automated influence operations: Emerging threats and potential mitigations

JA Goldstein, G Sastry, M Musser, R DiResta… - arXiv preprint arXiv …, 2023 - arxiv.org
Generative language models have improved drastically, and can now produce realistic text
outputs that are difficult to distinguish from human-written content. For malicious actors …

Ablating concepts in text-to-image diffusion models

N Kumari, B Zhang, SY Wang… - Proceedings of the …, 2023 - openaccess.thecvf.com
Large-scale text-to-image diffusion models can generate high-fidelity images with powerful
compositional ability. However, these models are typically trained on an enormous amount …

A survey of machine unlearning

TT Nguyen, TT Huynh, Z Ren, PL Nguyen… - arXiv preprint arXiv …, 2022 - arxiv.org
Today, computer systems hold large amounts of personal data. Yet while such an
abundance of data allows breakthroughs in artificial intelligence, and especially machine …

Quantifying memorization across neural language models

N Carlini, D Ippolito, M Jagielski, K Lee… - arXiv preprint arXiv …, 2022 - arxiv.org
Large language models (LMs) have been shown to memorize parts of their training data,
and when prompted appropriately, they will emit the memorized training data verbatim. This …

ProPILE: Probing privacy leakage in large language models

S Kim, S Yun, H Lee, M Gubri… - Advances in Neural …, 2024 - proceedings.neurips.cc
The rapid advancement and widespread use of large language models (LLMs) have raised
significant concerns regarding the potential leakage of personally identifiable information …

Red teaming language models with language models

E Perez, S Huang, F Song, T Cai, R Ring… - arXiv preprint arXiv …, 2022 - arxiv.org
Language Models (LMs) often cannot be deployed because of their potential to harm users
in hard-to-predict ways. Prior work identifies harmful behaviors before deployment by using …