On large language models safety, security, and privacy: A survey

R Zhang, HW Li, XY Qian, WB Jiang… - Journal of Electronic …, 2025 - Elsevier
The integration of artificial intelligence (AI) technology, particularly large language models
(LLMs), has become essential across various sectors due to their advanced language …

Evaluating differentially private synthetic data generation in high-stakes domains

K Ramesh, N Gandhi, P Madaan, L Bauer… - arXiv preprint arXiv …, 2024 - arxiv.org
The difficulty of anonymizing text data hinders the development and deployment of NLP in
high-stakes domains that involve private data, such as healthcare and social services …

Efficient and Private: Memorisation under differentially private parameter-efficient fine-tuning in language models

O Ma, J Passerat-Palmbach, D Usynin - arXiv preprint arXiv:2411.15831, 2024 - arxiv.org
Fine-tuning large language models (LLMs) for specific tasks introduces privacy risks, as
models may inadvertently memorise and leak sensitive training data. While Differential …

SafeSynthDP: Leveraging Large Language Models for Privacy-Preserving Synthetic Data Generation Using Differential Privacy

MMH Nahid, SB Hasan - arXiv preprint arXiv:2412.20641, 2024 - arxiv.org
Machine learning (ML) models frequently rely on training data that may include sensitive or
personal information, raising substantial privacy concerns. Legislative frameworks such as …

A Hassle-free Algorithm for Strong Differential Privacy in Federated Learning Systems

HB McMahan, Z Xu, Y Zhang - Proceedings of the 2024 …, 2024 - aclanthology.org
Differential privacy (DP) and federated learning (FL) are combined as advanced privacy-
preserving methods when training on-device language models in production mobile …