We study the problem of private distribution learning with access to public data. In this setup, which we refer to as *public-private learning*, the learner is given public and private …
We study (differentially) private federated learning (FL) of language models. Language models in cross-device FL are relatively small and can be trained with meaningful formal …
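For concreteness, a minimal numpy sketch of how DP is commonly added in cross-device FL (a DP-FedAvg-style server round). The snippet does not specify this paper's exact algorithm, and `clip_norm` / `noise_multiplier` are illustrative parameter names, not taken from the paper:

```python
import numpy as np

def dp_fedavg_round(client_deltas, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """One DP-FedAvg-style server round (illustrative sketch, not the paper's method).

    Each client's model delta is clipped to bound its L2 sensitivity, the clipped
    deltas are averaged, and Gaussian noise calibrated to clip_norm is added.
    """
    rng = rng or np.random.default_rng(0)
    clipped = []
    for delta in client_deltas:
        norm = np.linalg.norm(delta)
        clipped.append(delta * min(1.0, clip_norm / max(norm, 1e-12)))
    avg = np.mean(clipped, axis=0)
    # Noise std for the average of n clipped contributions: sigma * C / n.
    std = noise_multiplier * clip_norm / len(client_deltas)
    return avg + rng.normal(0.0, std, size=avg.shape)
```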
In privacy-preserving machine learning, differentially private stochastic gradient descent (DP-SGD) performs worse than SGD due to per-sample gradient clipping and noise addition. A …
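The two operations the snippet names are easy to see in a from-scratch sketch. Assuming per-sample gradients are already computed, one DP-SGD step looks roughly like this (hyperparameter values are illustrative):

```python
import numpy as np

def dp_sgd_step(params, per_sample_grads, lr=0.1, clip_norm=1.0,
                noise_multiplier=1.0, rng=None):
    """One DP-SGD step: clip each per-sample gradient to clip_norm, sum,
    add Gaussian noise scaled to the clipping bound, then average."""
    rng = rng or np.random.default_rng(0)
    total = np.zeros_like(params)
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        total += g * min(1.0, clip_norm / max(norm, 1e-12))  # per-sample clipping
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=params.shape)
    grad_estimate = (total + noise) / len(per_sample_grads)
    return params - lr * grad_estimate
```

Both the clipping bias and the added noise distort the gradient estimate relative to plain SGD, which is the utility gap the snippet refers to.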
Artificial intelligence (AI) has seen a tremendous surge in capabilities thanks to the use of foundation models trained on internet-scale data. On the flip side, the uncurated nature of …
Generating differentially private (DP) synthetic data that closely resembles the original private data is a scalable way to mitigate privacy concerns in the current data-driven world …
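As a toy illustration only (the snippet does not describe a specific generator), the simplest DP synthetic-data recipe releases a Laplace-noised histogram and samples fresh records from it:

```python
import numpy as np

def dp_synthetic_categorical(data, num_categories, epsilon=1.0,
                             n_samples=1000, rng=None):
    """Toy DP synthetic-data generator for one categorical column:
    Laplace mechanism on the histogram, then sampling from it."""
    rng = rng or np.random.default_rng(0)
    counts = np.bincount(data, minlength=num_categories).astype(float)
    # Adding/removing one record changes one count by 1, so L1 sensitivity is 1.
    noisy = counts + rng.laplace(0.0, 1.0 / epsilon, size=num_categories)
    probs = np.clip(noisy, 0.0, None)
    probs = (probs / probs.sum() if probs.sum() > 0
             else np.full(num_categories, 1.0 / num_categories))
    return rng.choice(num_categories, size=n_samples, p=probs)
```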
W Krichene, NE Mayoraz, S Rendle… - International …, 2024 - proceedings.mlr.press
We study a class of private learning problems in which the data is a join of private and public features. This is often the case in private personalization tasks such as recommendation or …
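A hypothetical sketch of the data layout described here, with invented names (`item_table`, `user_log`): each training example joins public item features with private user features, and only the private side needs DP protection:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class JoinedExample:
    """One training example in a public-feature / private-feature join.

    public_features: e.g. item metadata shared across users (no DP needed).
    private_features: e.g. a user's interaction history (requires DP).
    """
    public_features: np.ndarray
    private_features: np.ndarray
    label: float

# Toy join: a public item table keyed by item_id, plus a private user log.
item_table = {7: np.array([0.3, 1.2])}           # public side
user_log = [(7, np.array([0.9, 0.1]), 1.0)]      # private side: (item_id, user_feats, label)
dataset = [JoinedExample(item_table[i], u, y) for i, u, y in user_log]
```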
Fine-tuning large pretrained models on private datasets risks violating privacy. Differential privacy is a framework for mitigating privacy risks by enforcing algorithmic …
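In practice this kind of DP fine-tuning is often done with a library such as Opacus; a minimal sketch under that assumption (the snippet does not name a library, and all hyperparameters below are illustrative):

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

# Toy "fine-tuning" setup: a small linear head on fixed 16-d features.
model = nn.Linear(16, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
data = TensorDataset(torch.randn(256, 16), torch.randint(0, 2, (256,)))
loader = DataLoader(data, batch_size=32)

# Opacus wraps model/optimizer/loader to do per-sample clipping + noising.
privacy_engine = PrivacyEngine()
model, optimizer, loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=loader,
    noise_multiplier=1.0,  # illustrative values, not from the snippet
    max_grad_norm=1.0,
)

criterion = nn.CrossEntropyLoss()
for x, y in loader:
    optimizer.zero_grad()
    criterion(model(x), y).backward()
    optimizer.step()

print(f"epsilon spent: {privacy_engine.get_epsilon(delta=1e-5):.2f}")
```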
Privacy-preserving machine learning aims to train models on private data without leaking sensitive information. Differential privacy (DP) is considered the gold standard framework for …
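For reference, the standard (ε, δ)-DP guarantee the snippet refers to as the gold standard:

```latex
% A randomized mechanism M is (\epsilon, \delta)-differentially private if,
% for all neighboring datasets D, D' (differing in one record) and every
% measurable set S of outputs:
\Pr[\,M(D) \in S\,] \;\le\; e^{\epsilon} \, \Pr[\,M(D') \in S\,] + \delta
```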
X Gu, G Kamath, ZS Wu - arXiv preprint arXiv:2303.01256, 2023 - arxiv.org
Differentially private stochastic gradient descent privatizes model training by injecting noise into each iteration, where the noise magnitude increases with the number of model …
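A quick numeric check of that scaling: with a fixed per-coordinate noise standard deviation σ (the value below is illustrative), the L2 norm of the injected noise grows like σ√d in the parameter count d, so larger models absorb proportionally more noise:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0  # fixed per-coordinate noise std (illustrative)
for d in [10**3, 10**5, 10**7]:
    noise = rng.normal(0.0, sigma, size=d)
    # E[||noise||_2] ~ sigma * sqrt(d): total noise grows with model size.
    print(f"d={d:>8}: ||noise|| = {np.linalg.norm(noise):.1f}, "
          f"sigma*sqrt(d) = {sigma * np.sqrt(d):.1f}")
```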