Privacy in large language models: Attacks, defenses and future directions

H Li, Y Chen, J Luo, J Wang, H Peng, Y Kang… - arXiv preprint arXiv …, 2023 - arxiv.org
The advancement of large language models (LLMs) has significantly enhanced the ability to
effectively tackle various downstream NLP tasks and unify these tasks into generative …

BumbleBee: Secure two-party inference framework for large transformers

W Lu, Z Huang, Z Gu, J Li, J Liu, C Hong… - Cryptology ePrint …, 2023 - eprint.iacr.org
Large transformer-based models have achieved state-of-the-art performance on many real-
world tasks such as natural language processing and computer vision. However, with the …

Secure transformer inference made non-interactive

J Zhang, X Yang, L He, K Chen, W Lu… - Cryptology ePrint …, 2024 - eprint.iacr.org
Secure transformer inference has emerged as a prominent research topic following the
proliferation of ChatGPT. Existing solutions are typically interactive, involving substantial …

SecFormer: Towards fast and accurate privacy-preserving inference for large language models

J Luo, Y Zhang, Z Zhang, J Zhang, X Mu… - arXiv preprint arXiv …, 2024 - arxiv.org
With the growing use of large language models hosted on cloud platforms to offer inference
services, privacy concerns are escalating, especially concerning sensitive data like …

SecFormer: Fast and Accurate Privacy-Preserving Inference for Transformer Models via SMPC

J Luo, Y Zhang, Z Zhang, J Zhang, X Mu… - Findings of the …, 2024 - aclanthology.org
With the growing use of Transformer models hosted on cloud platforms to offer inference
services, privacy concerns are escalating, especially concerning sensitive data like …

MPC-minimized secure LLM inference

D Rathee, D Li, I Stoica, H Zhang, R Popa - arXiv preprint arXiv …, 2024 - arxiv.org
Many inference services based on large language models (LLMs) pose a privacy concern,
either revealing user prompts to the service or the proprietary weights to the user. Secure …

Approximate homomorphic encryption based privacy-preserving machine learning: a survey

J Yuan, W Liu, J Shi, Q Li - Artificial Intelligence Review, 2025 - Springer
Machine Learning (ML) is rapidly advancing, enabling various applications that
improve people's work and daily lives. However, this technical progress brings privacy …

Rhombus: Fast Homomorphic Matrix-Vector Multiplication for Secure Two-Party Inference

J He, K Yang, G Tang, Z Huang, L Lin, C Wei… - Proceedings of the …, 2024 - dl.acm.org
We present Rhombus, a new secure matrix-vector multiplication (MVM) protocol in the semi-
honest two-party setting, which can be seamlessly integrated into existing privacy …
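Editor's note: as general background for the MVM setting above (this is the standard additive-secret-sharing fact, not Rhombus's actual homomorphic protocol), linear layers are cheap in two-party computation because a public matrix can be applied to each additive share locally, and the results recombine to the true product. A minimal sketch:

```python
import random

Q = 2**32  # share modulus

def share(x):
    """Split each coordinate of vector x into two additive shares mod Q."""
    x0 = [random.randrange(Q) for _ in x]
    x1 = [(xi - si) % Q for xi, si in zip(x, x0)]
    return x0, x1

def matvec(W, v):
    """Matrix-vector product over Z_Q."""
    return [sum(wij * vj for wij, vj in zip(row, v)) % Q for row in W]

W = [[3, 1, 4], [1, 5, 9]]   # public matrix
x = [2, 7, 1]                # private vector

x0, x1 = share(x)
y0 = matvec(W, x0)           # party 0 computes on its share only
y1 = matvec(W, x1)           # party 1 computes on its share only
reconstructed = [(a + b) % Q for a, b in zip(y0, y1)]
assert reconstructed == matvec(W, x)   # → [17, 46], i.e. W·x
```

When the matrix itself is a private model weight (as in transformer inference), this local trick no longer suffices, which is where homomorphic-encryption MVM protocols like the one above come in.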

Powerformer: Efficient privacy-preserving transformer with batch rectifier-power max function and optimized homomorphic attention

D Park, E Lee, JW Lee - Cryptology ePrint Archive, 2024 - eprint.iacr.org
We propose an efficient non-interactive privacy-preserving Transformer inference
architecture called Powerformer. Since softmax is a non-algebraic operation, previous …
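Editor's note: the snippet above hinges on softmax being non-algebraic (it needs `exp` and division), which homomorphic encryption cannot evaluate directly. A toy illustration of the general substitution idea, not Powerformer's actual batch rectifier-power max function, is to rectify, raise to a power, and normalize:

```python
import math

def softmax(xs):
    """Standard softmax: requires exp, which is non-algebraic."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def power_max(xs, p=4):
    """Toy algebraic substitute: rectify, raise to the p-th power, normalize.
    Uses only +, *, and one division; in a real HE pipeline the max/ReLU
    and the division would still need polynomial approximations of their own."""
    rs = [max(x, 0.0) ** p for x in xs]
    s = sum(rs)
    return [r / s for r in rs]

scores = [1.0, 2.0, 3.0, 0.5]
# Both produce a probability-like vector that preserves the argmax.
assert abs(sum(power_max(scores)) - 1.0) < 1e-9
```

The substitute keeps the attention weights normalized and order-preserving while trading the transcendental `exp` for operations an HE scheme can approximate.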

Curl: Private LLMs through wavelet-encoded look-up tables

MB Santos, D Mouris, M Ugurbil, S Jarecki… - Cryptology ePrint …, 2024 - eprint.iacr.org
Recent advancements in transformers have revolutionized machine learning, forming the
core of large language models (LLMs). However, integrating these systems into everyday …
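Editor's note: a generic illustration of the look-up-table idea named in the title above (Curl's actual construction encodes tables with wavelets inside MPC, which this sketch does not attempt): precompute a non-linear activation such as GELU on a fixed grid offline, then replace each transcendental evaluation with a table read plus linear interpolation.

```python
import math

def gelu(x):
    """GELU activation (tanh approximation)."""
    return 0.5 * x * (1.0 + math.tanh(math.sqrt(2.0 / math.pi)
                                      * (x + 0.044715 * x ** 3)))

LO, HI, N = -6.0, 6.0, 256
STEP = (HI - LO) / N
TABLE = [gelu(LO + i * STEP) for i in range(N + 1)]  # built once, offline

def gelu_lut(x):
    """Evaluate GELU via table lookup + linear interpolation,
    clamping inputs to the table's range."""
    x = min(max(x, LO), HI)
    i = min(int((x - LO) / STEP), N - 1)
    frac = (x - LO) / STEP - i
    return TABLE[i] * (1.0 - frac) + TABLE[i + 1] * frac

# A 257-entry table already tracks the exact function closely.
assert all(abs(gelu_lut(v) - gelu(v)) < 1e-2 for v in [-2.0, -0.5, 0.0, 1.3, 4.2])
```

In a secure-computation setting the lookup itself becomes an oblivious table access, but the offline-table/online-read split shown here is the core of why LUT approaches avoid evaluating transcendental functions under encryption.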