Unique security and privacy threats of large language model: A comprehensive survey

S Wang, T Zhu, B Liu, M Ding, X Guo, D Ye… - arXiv preprint arXiv …, 2024 - arxiv.org
With the rapid development of artificial intelligence, large language models (LLMs) have
made remarkable advancements in natural language processing. These models are trained …

On protecting the data privacy of large language models (LLMs): A survey

B Yan, K Li, M Xu, Y Dong, Y Zhang, Z Ren… - arXiv preprint arXiv …, 2024 - arxiv.org
Large language models (LLMs) are complex artificial intelligence systems capable of
understanding, generating and translating human language. They learn language patterns …

Bolt: Privacy-preserving, accurate and efficient inference for transformers

Q Pang, J Zhu, H Möllering, W Zheng… - … IEEE Symposium on …, 2024 - ieeexplore.ieee.org
The advent of transformers has brought about significant advancements in traditional
machine learning tasks. However, their pervasive deployment has raised concerns about …

Privacy in large language models: Attacks, defenses and future directions

H Li, Y Chen, J Luo, J Wang, H Peng, Y Kang… - arXiv preprint arXiv …, 2023 - arxiv.org
The advancement of large language models (LLMs) has significantly enhanced the ability to
effectively tackle various downstream NLP tasks and unify these tasks into generative …

Bumblebee: Secure two-party inference framework for large transformers

W Lu, Z Huang, Z Gu, J Li, J Liu, C Hong… - Cryptology ePrint …, 2023 - eprint.iacr.org
Large transformer-based models have achieved state-of-the-art performance on many real-world tasks such as natural language processing and computer vision. However, with the …

Secure transformer inference made non-interactive

J Zhang, X Yang, L He, K Chen, W Lu… - Cryptology ePrint …, 2024 - eprint.iacr.org
Secure transformer inference has emerged as a prominent research topic following the
proliferation of ChatGPT. Existing solutions are typically interactive, involving substantial …

Grounding foundation models through federated transfer learning: A general framework

Y Kang, T Fan, H Gu, X Zhang, L Fan… - arXiv preprint arXiv …, 2023 - arxiv.org
Foundation Models (FMs) such as GPT-4 encoded with vast knowledge and powerful
emergent abilities have achieved remarkable success in various natural language …

Secformer: Towards fast and accurate privacy-preserving inference for large language models

J Luo, Y Zhang, Z Zhang, J Zhang, X Mu… - arXiv preprint arXiv …, 2024 - arxiv.org
With the growing use of large language models hosted on cloud platforms to offer inference
services, privacy concerns are escalating, especially concerning sensitive data like …

From accuracy to approximation: A survey on approximate homomorphic encryption and its applications

W Liu, L You, Y Shao, X Shen, G Hu, J Shi… - Computer Science …, 2025 - Elsevier
Due to the increasing popularity of application scenarios such as cloud computing, and the
growing concern of users about the security and privacy of their data, information security …

East: Efficient and accurate secure transformer framework for inference

Y Ding, H Guo, Y Guan, W Liu, J Huo, Z Guan… - arXiv preprint arXiv …, 2023 - arxiv.org
Transformer has been successfully used in practical applications, such as ChatGPT, due to
its powerful advantages. However, users' input is leaked to the model provider during the …