Differentially private natural language models: Recent advances and future directions

L Hu, I Habernal, L Shen, D Wang - arXiv preprint arXiv:2301.09112, 2023 - arxiv.org
Recent developments in deep learning have led to great success in various natural
language processing (NLP) tasks. However, these applications may involve data that …
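
As background for the mechanisms this survey covers, below is a minimal sketch of DP-SGD-style gradient privatization (per-example clipping plus Gaussian noise), the workhorse behind most differentially private NLP training. Function and parameter names are illustrative assumptions, not taken from the paper.

    # Minimal DP-SGD-style gradient privatization sketch (illustrative only).
    import numpy as np

    def privatize_gradient(per_example_grads, clip_norm=1.0, noise_multiplier=1.1):
        """Clip each per-example gradient to L2 norm `clip_norm`, sum them,
        add Gaussian noise scaled to the clipping bound, and average."""
        clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
                   for g in per_example_grads]
        summed = np.sum(clipped, axis=0)
        noise = np.random.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
        return (summed + noise) / len(per_example_grads)

Clipping bounds each example's influence on the sum, which is what lets the added noise translate into a formal (epsilon, delta) guarantee.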

Privacy-preserving prompt tuning for large language model services

Y Li, Z Tan, Y Liu - arXiv preprint arXiv:2305.06212, 2023 - arxiv.org
Prompt tuning provides an efficient way for users to customize Large Language Models
(LLMs) with their private data in the emerging LLM service scenario. However, the sensitive …
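
For context, the sketch below shows the vanilla soft-prompt-tuning mechanism this line of work builds on: a small matrix of trainable prompt vectors is prepended to the input embeddings while the LLM itself stays frozen. It does not reproduce the paper's privacy-preserving scheme; all dimensions and names are hypothetical.

    # Vanilla soft prompt tuning sketch (not the paper's privacy scheme).
    import numpy as np

    rng = np.random.default_rng(0)
    d_model, n_prompt = 768, 20

    # The soft prompt is the only trainable part; in the LLM-service setting
    # it is what the user tunes on private data.
    soft_prompt = rng.normal(0.0, 0.02, size=(n_prompt, d_model))

    def prepend_prompt(input_embeddings):
        """Concatenate the learned prompt vectors in front of the token
        embeddings (T, d_model) before they enter the frozen model."""
        return np.concatenate([soft_prompt, input_embeddings], axis=0)

    tokens = rng.normal(size=(5, d_model))      # stand-in token embeddings
    assert prepend_prompt(tokens).shape == (25, 768)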

DEPN: Detecting and editing privacy neurons in pretrained language models

X Wu, J Li, M Xu, W Dong, S Wu, C Bian… - arXiv preprint arXiv …, 2023 - arxiv.org
Large language models pretrained on huge amounts of data capture rich knowledge and
information from their training data. The ability of data memorization and regurgitation in …

Grounding foundation models through federated transfer learning: A general framework

Y Kang, T Fan, H Gu, X Zhang, L Fan… - arXiv preprint arXiv …, 2023 - arxiv.org
Foundation Models (FMs) such as GPT-4, encoded with vast knowledge and powerful
emergent abilities, have achieved remarkable success in various natural language …

A Comprehensive Review of Current Trends, Challenges, and Opportunities in Text Data Privacy

S Shahriar, R Dara, R Akalu - Computers & Security, 2025 - Elsevier
The emergence of smartphones and internet accessibility around the globe have enabled
billions of people to be connected to the digital world. Due to the popularity of instant …

You are what you write: Preserving privacy in the era of large language models

R Plant, V Giuffrida, D Gkatzia - arXiv preprint arXiv:2204.09391, 2022 - arxiv.org
Large-scale adoption of large language models has introduced a new era of convenient
knowledge transfer for a slew of natural language processing tasks. However, these models …

Privacy preserving prompt engineering: A survey

K Edemacu, X Wu - arXiv preprint arXiv:2404.06001, 2024 - arxiv.org
Pre-trained language models (PLMs) have demonstrated significant proficiency in solving a
wide range of general natural language processing (NLP) tasks. Researchers have …

TextFusion: Privacy-preserving pre-trained model inference via token fusion

X Zhou, J Lu, T Gui, R Ma, Z Fei, Y Wang… - Proceedings of the …, 2022 - aclanthology.org
Recently, more and more pre-trained language models are being released as cloud services. This
allows users who lack computing resources to perform inference with a powerful model by …
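
The general "token fusion" idea can be pictured as merging neighboring token representations before they leave the client, so individual (possibly sensitive) tokens are no longer recoverable one-to-one. The pairwise-averaging policy below is purely illustrative; the paper's actual fusion strategy differs.

    # Illustrative token fusion sketch: average consecutive embedding pairs.
    import numpy as np

    def fuse_pairs(token_embeddings):
        """Fuse (T, d) token embeddings into (ceil(T/2), d) by averaging
        consecutive pairs; pads odd-length inputs with the last token."""
        T, d = token_embeddings.shape
        if T % 2 == 1:
            token_embeddings = np.vstack([token_embeddings, token_embeddings[-1:]])
        return token_embeddings.reshape(-1, 2, d).mean(axis=1)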

Protecting user privacy in remote conversational systems: A privacy-preserving framework based on text sanitization

Z Kan, L Qiao, H Yu, L Peng, Y Gao, D Li - arXiv preprint arXiv:2306.08223, 2023 - arxiv.org
Large Language Models (LLMs) are gaining increasing attention due to their exceptional
performance across numerous tasks. As a result, the general public utilize them as an …
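
As a rough illustration of client-side text sanitization before a remote LLM call, the sketch below masks simple PII patterns with regular expressions. Every pattern and placeholder here is an assumption; the paper's sanitization framework is more elaborate than plain pattern matching.

    # Regex-based PII masking sketch (illustrative, not the paper's method).
    import re

    PII_PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
    }

    def sanitize(text):
        """Replace detected PII spans with type placeholders before the
        text leaves the user's device."""
        for label, pattern in PII_PATTERNS.items():
            text = pattern.sub(f"[{label}]", text)
        return text

    print(sanitize("Reach me at jane@example.com or 555-123-4567."))
    # -> Reach me at [EMAIL] or [PHONE].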

Fair NLP models with differentially private text encoders

G Maheshwari, P Denis, M Keller, A Bellet - arXiv preprint arXiv …, 2022 - arxiv.org
Encoded text representations often capture sensitive attributes about individuals (e.g., race or
gender), which raise privacy concerns and can make downstream models unfair to certain …
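
The generic mechanism behind differentially private text encoders can be sketched as calibrated noise added to a norm-clipped embedding. Below is the standard Laplace mechanism over an L1-clipped vector, a stand-in under stated assumptions rather than the paper's specific construction.

    # Laplace mechanism on a clipped text embedding (generic sketch).
    import numpy as np

    def dp_encode(embedding, epsilon=1.0, sensitivity=1.0):
        """Clip the embedding's L1 norm to `sensitivity`, then add Laplace
        noise with scale sensitivity / epsilon."""
        clipped = embedding / max(1.0, np.abs(embedding).sum() / sensitivity)
        noise = np.random.laplace(0.0, sensitivity / epsilon, size=embedding.shape)
        return clipped + noise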