LLMs can understand encrypted prompt: Towards privacy-computing friendly transformers

X Liu, Z Liu - arXiv preprint arXiv:2305.18396, 2023 - arxiv.org
The community has explored building private inference frameworks for transformer-based large
language models (LLMs) in a server-client setting, where the server holds the model …
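Frameworks in this line typically evaluate the transformer under secure two-party computation, with the client's inputs secret-shared between the parties. Below is a minimal sketch of additive secret sharing, the usual 2PC substrate; the function names and the ring choice are illustrative assumptions, and the paper's actual protocols are far more involved:

```python
import numpy as np

MOD = 2**32  # ring Z_{2^32}, a common choice in 2PC frameworks (assumed here)

def share(x: np.ndarray):
    """Split x into two additive shares over Z_MOD; each share alone
    is uniformly random and reveals nothing about x."""
    r = np.random.randint(0, MOD, size=x.shape, dtype=np.uint64)
    return r, (x.astype(np.uint64) - r) % MOD

def reconstruct(s0: np.ndarray, s1: np.ndarray) -> np.ndarray:
    """Adding the two shares modulo MOD recovers the original value."""
    return (s0 + s1) % MOD
```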

Merge: Fast private text generation

Z Liang, P Wang, R Zhang, N Xu, S Zhang… - Proceedings of the …, 2024 - ojs.aaai.org
The drastic increase in language models' parameters has led to a new trend of deploying
models on cloud servers, raising growing concerns about private inference for Transformer …

Split-and-Denoise: Protect large language model inference with local differential privacy

P Mai, R Yan, Z Huang, Y Yang, Y Pang - arXiv preprint arXiv:2310.09130, 2023 - arxiv.org
Large Language Models (LLMs) show powerful capability in natural language
understanding by capturing hidden semantics in vector space. This process enriches the …
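The core idea in this line of work is to perturb token embeddings on the client before they leave the device. A minimal local-DP sketch (clip each embedding, then add Laplace noise calibrated to the clipping bound); `ldp_perturb` is a hypothetical name and this is generic local DP, not the paper's exact Split-and-Denoise mechanism or its denoising step:

```python
import numpy as np

def ldp_perturb(emb: np.ndarray, epsilon: float, clip: float = 1.0) -> np.ndarray:
    """Clip each token embedding to L2 norm <= clip, then add Laplace
    noise calibrated to the worst-case L1 sensitivity of the result."""
    norms = np.linalg.norm(emb, axis=-1, keepdims=True)
    clipped = emb * np.minimum(1.0, clip / np.maximum(norms, 1e-12))
    d = emb.shape[-1]
    # any two vectors with L2 norm <= clip differ by at most 2*clip*sqrt(d) in L1
    sensitivity = 2.0 * clip * np.sqrt(d)
    return clipped + np.random.laplace(0.0, sensitivity / epsilon, size=emb.shape)
```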

TextFusion: Privacy-preserving pre-trained model inference via token fusion

X Zhou, J Lu, T Gui, R Ma, Z Fei, Y Wang… - Proceedings of the …, 2022 - aclanthology.org
Recently, more and more pre-trained language models have been released as cloud services. This
allows users who lack computing resources to perform inference with a powerful model by …

PrivInfer: Privacy-preserving inference for black-box large language model

M Tong, K Chen, Y Qi, J Zhang, W Zhang… - arXiv preprint arXiv …, 2023 - arxiv.org
Large language models (LLMs), such as ChatGPT, have simplified text generation tasks, yet
their inherent privacy risks are increasingly garnering attention. While differential privacy …

Privacy-preserving large language models: ChatGPT case study based vision and framework

I Ullah, N Hassan, SS Gill, B Suleiman… - arXiv preprint arXiv …, 2023 - arxiv.org
Generative Artificial Intelligence (AI) tools based on Large Language Models (LLMs) use
billions of parameters to extensively analyse large datasets and extract critical private …

DP-OPT: Make large language model your privacy-preserving prompt engineer

J Hong, JT Wang, C Zhang, Z Li, B Li… - arXiv preprint arXiv …, 2023 - arxiv.org
Large Language Models (LLMs) have emerged as dominant tools for various tasks,
particularly when tailored for a specific target by prompt tuning. Nevertheless, concerns …

Just fine-tune twice: Selective differential privacy for large language models

W Shi, R Shea, S Chen, C Zhang, R Jia… - arXiv preprint arXiv …, 2022 - arxiv.org
Protecting large language models from privacy leakage is becoming increasingly crucial
with their wide adoption in real-world products. Yet applying differential privacy (DP), a …
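For context, the DP machinery these fine-tuning papers build on is DP-SGD (per-example gradient clipping plus Gaussian noise); the selective variant above, roughly, applies that protection only to the sensitive parts of the data. A generic single-step sketch with a hypothetical `dp_sgd_step`, not the paper's own method:

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm: float = 1.0, noise_mult: float = 1.0):
    """One DP-SGD update: clip each example's gradient to clip_norm,
    average, then add Gaussian noise scaled to the clipping bound."""
    clipped = [g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
               for g in per_example_grads]
    mean = np.mean(clipped, axis=0)
    sigma = noise_mult * clip_norm / len(clipped)  # noise std on the averaged gradient
    return mean + np.random.normal(0.0, sigma, size=mean.shape)
```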

Differentially private model compression

F Mireshghallah, A Backurs, HA Inan… - Advances in …, 2022 - proceedings.neurips.cc
Recent papers have shown that large pre-trained language models (LLMs) such as BERT
and GPT-2 can be fine-tuned on private data to achieve performance comparable to non-private …

Privacy-preserving prompt tuning for large language model services

Y Li, Z Tan, Y Liu - arXiv preprint arXiv:2305.06212, 2023 - arxiv.org
Prompt tuning provides an efficient way for users to customize Large Language Models
(LLMs) with their private data in the emerging LLM service scenario. However, the sensitive …
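Prompt tuning itself just learns a few trainable embedding vectors prepended to the input while the LLM stays frozen; the privacy question is how to train them without exposing user data. A standard soft-prompt module in the style of Lester et al., shown only as background; the paper's privacy mechanisms would be layered on top and are not reproduced here:

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """n_tokens trainable prompt embeddings prepended to the input
    embeddings; only these parameters are updated during tuning."""
    def __init__(self, n_tokens: int, dim: int):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(n_tokens, dim) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        batch = input_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompt, input_embeds], dim=1)
```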