Differentially private federated learning: A systematic review

J Fu, Y Hong, X Ling, L Wang, X Ran, Z Sun… - arXiv preprint arXiv …, 2024 - arxiv.org
In recent years, privacy and security concerns in machine learning have promoted trusted
federated learning to the forefront of research. Differential privacy has emerged as the de …
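
For readers unfamiliar with the mechanism the survey refers to, the sketch below shows the clip-and-noise step that typical differentially private federated averaging applies to client updates before aggregation. The function name, clip norm, and noise multiplier are illustrative assumptions, not details taken from the survey.

```python
# Minimal sketch (not from the survey above): clip a client's update in L2 norm,
# then add Gaussian noise, before the server averages the privatized updates.
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.0,
                     rng=np.random.default_rng(0)):
    """Clip `update` to L2 norm `clip_norm`, then add Gaussian noise scaled to it."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

# Server-side aggregation of the privatized client updates (plain averaging).
client_updates = [np.ones(4), 2 * np.ones(4), -np.ones(4)]
aggregate = np.mean([privatize_update(u) for u in client_updates], axis=0)
```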

SoK: Memorization in general-purpose large language models

V Hartmann, A Suri, V Bindschaedler, D Evans… - arXiv preprint arXiv …, 2023 - arxiv.org
Large Language Models (LLMs) are advancing at a remarkable pace, with myriad
applications under development. Unlike most earlier machine learning models, they are no …

Machine learning with confidential computing: A systematization of knowledge

F Mo, Z Tarkhani, H Haddadi - ACM Computing Surveys, 2024 - dl.acm.org
Privacy and security challenges in Machine Learning (ML) have become increasingly
severe, along with ML's pervasive development and the recent demonstration of large attack …

Verifiable and provably secure machine unlearning

T Eisenhofer, D Riepel, V Chandrasekaran… - arXiv preprint arXiv …, 2022 - arxiv.org
Machine unlearning aims to remove points from the training dataset of a machine learning
model after training; for example, when a user requests that their data be deleted. While many …
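
As a point of reference for what unlearning must achieve, the hedged sketch below shows the naive baseline of exact unlearning: retraining from scratch without the deleted points (scikit-learn is used purely for illustration). It is not the verification protocol proposed in the paper; verifiable and approximate unlearning schemes aim to avoid or certify this full retraining cost.

```python
# Minimal sketch (illustrative, not the paper's protocol): exact unlearning
# by retraining the model without the points a user asked to delete.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

model = LogisticRegression().fit(X, y)                    # original model

forget = {3, 17, 42}                                      # indices requested for deletion
keep = [i for i in range(len(X)) if i not in forget]
retrained = LogisticRegression().fit(X[keep], y[keep])    # model with the data removed
```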

Adversarial machine learning

A Vassilev, A Oprea, A Fordyce, H Anderson - Gaithersburg, MD, 2024 - site.unibo.it
This NIST Trustworthy and Responsible AI report develops a taxonomy of concepts
and defines terminology in the field of adversarial machine learning (AML). The taxonomy is …

Unforgeability in stochastic gradient descent

T Baluta, I Nikolic, R Jain, D Aggarwal… - Proceedings of the 2023 …, 2023 - dl.acm.org
Stochastic Gradient Descent (SGD) is a popular training algorithm, a cornerstone of modern
machine learning systems. Several security applications benefit from determining if SGD …
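
To make the object of study concrete, the sketch below shows a plain mini-batch SGD update on a least-squares loss. The loss, learning rate, and batch size are illustrative assumptions; the code does not reproduce the paper's unforgeability analysis, which asks whether a claimed sequence of such steps can be forged.

```python
# Minimal sketch of mini-batch SGD on 0.5 * ||X w - y||^2 (illustrative only).
import numpy as np

def sgd_step(w, X_batch, y_batch, lr=0.1):
    """One gradient step on the squared error averaged over the mini-batch."""
    grad = X_batch.T @ (X_batch @ w - y_batch) / len(y_batch)
    return w - lr * grad

rng = np.random.default_rng(0)
X, y = rng.normal(size=(32, 3)), rng.normal(size=32)
w = np.zeros(3)
for _ in range(10):
    idx = rng.choice(len(y), size=8, replace=False)   # sample a random mini-batch
    w = sgd_step(w, X[idx], y[idx])
```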

Reinforcement unlearning

D Ye, T Zhu, C Zhu, D Wang, S Shen, W Zhou - arXiv preprint arXiv …, 2023 - arxiv.org
Machine unlearning refers to the process of mitigating the influence of specific training data
on machine learning models based on removal requests from data owners. However, one …

Advancing differential privacy: Where we are now and future directions for real-world deployment

R Cummings, D Desfontaines, D Evans… - arXiv preprint arXiv …, 2023 - arxiv.org
In this article, we present a detailed review of current practices and state-of-the-art
methodologies in the field of differential privacy (DP), with a focus on advancing DP's …

Privacy and security implications of cloud-based AI services: A survey

A Luqman, R Mahesh, A Chattopadhyay - arXiv preprint arXiv:2402.00896, 2024 - arxiv.org
This paper details the privacy and security landscape in today's cloud ecosystem and
identifies a gap in addressing the risks introduced by machine learning models …

From Principle to Practice: Vertical Data Minimization for Machine Learning

R Staab, N Jovanović, M Balunović… - arXiv preprint arXiv …, 2023 - arxiv.org
Aiming to train and deploy predictive models, organizations collect large amounts of detailed
client data, risking the exposure of private information in the event of a breach. To mitigate …