How to DP-fy ML: A practical guide to machine learning with differential privacy

N Ponomareva, H Hazimeh, A Kurakin, Z Xu… - Journal of Artificial …, 2023 - jair.org
Machine Learning (ML) models are ubiquitous in real-world applications and are a
constant focus of research. Modern ML models have become more complex, deeper, and …

LLM-based edge intelligence: A comprehensive survey on architectures, applications, security and trustworthiness

O Friha, MA Ferrag, B Kantarci… - IEEE Open Journal …, 2024 - ieeexplore.ieee.org
The integration of Large Language Models (LLMs) and Edge Intelligence (EI) introduces a
groundbreaking paradigm for intelligent edge devices. With their capacity for human-like …

Emergent and predictable memorization in large language models

S Biderman, U Prashanth, L Sutawika… - Advances in …, 2024 - proceedings.neurips.cc
Memorization, or the tendency of large language models (LLMs) to output entire sequences
from their training data verbatim, is a key concern for deploying language models. In …

Quantifying memorization across neural language models

N Carlini, D Ippolito, M Jagielski, K Lee… - arXiv preprint arXiv …, 2022 - arxiv.org
Large language models (LMs) have been shown to memorize parts of their training data,
and when prompted appropriately, they will emit the memorized training data verbatim. This …

Foundation models and fair use

P Henderson, X Li, D Jurafsky, T Hashimoto… - Journal of Machine …, 2023 - jmlr.org
Existing foundation models are trained on copyrighted material. Deploying these models
can pose both legal and ethical risks when data creators fail to receive appropriate …

On the opportunities and risks of foundation models

R Bommasani, DA Hudson, E Adeli, R Altman… - arXiv preprint arXiv …, 2021 - arxiv.org
AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are
trained on broad data at scale and are adaptable to a wide range of downstream tasks. We …

Large language models can be strong differentially private learners

X Li, F Tramer, P Liang, T Hashimoto - arXiv preprint arXiv:2110.05679, 2021 - arxiv.org
Differentially Private (DP) learning has seen limited success for building large deep learning
models of text, and straightforward attempts at applying Differentially Private Stochastic …

Differentially private fine-tuning of language models

D Yu, S Naik, A Backurs, S Gopi, HA Inan… - arXiv preprint arXiv …, 2021 - arxiv.org
We give simpler, sparser, and faster algorithms for differentially private fine-tuning of large-
scale pre-trained language models, which achieve the state-of-the-art privacy versus utility …

Flocks of stochastic parrots: Differentially private prompt learning for large language models

H Duan, A Dziedzic, N Papernot… - Advances in Neural …, 2024 - proceedings.neurips.cc
Large language models (LLMs) are excellent in-context learners. However, the sensitivity of
data contained in prompts raises privacy concerns. Our work first shows that these concerns …

What does it mean for a language model to preserve privacy?

H Brown, K Lee, F Mireshghallah, R Shokri… - Proceedings of the 2022 …, 2022 - dl.acm.org
Natural language reflects our private lives and identities, making its privacy concerns as
broad as those of real life. Language models lack the ability to understand the context and …