Personal LLM Agents: Insights and Survey about the Capability, Efficiency and Security

Y Li, H Wen, W Wang, X Li, Y Yuan, G Liu, J Liu… - arXiv preprint arXiv …, 2024 - arxiv.org
Since the advent of personal computing devices, intelligent personal assistants (IPAs) have
been one of the key technologies that researchers and engineers have focused on, aiming …

Grounding Foundation Models through Federated Transfer Learning: A General Framework

Y Kang, T Fan, H Gu, X Zhang, L Fan… - arXiv preprint arXiv …, 2023 - arxiv.org
Foundation Models (FMs) such as GPT-4, encoded with vast knowledge and powerful
emergent abilities, have achieved remarkable success in various natural language …

Text Embedding Inversion Security for Multilingual Language Models

Y Chen, H Lent, J Bjerva - … of the 62nd Annual Meeting of the …, 2024 - aclanthology.org
Textual data is often represented as real-valued embeddings in NLP, particularly with the
popularity of large language models (LLMs) and Embeddings as a Service (EaaS) …

CoGenesis: A Framework Collaborating Large and Small Language Models for Secure Context-Aware Instruction Following

K Zhang, J Wang, E Hua, B Qi, N Ding… - arXiv preprint arXiv …, 2024 - arxiv.org
With the advancement of language models (LMs), their exposure to private data is
increasingly inevitable, and their deployment (especially of smaller models) on personal …

Privacy Preserving Prompt Engineering: A Survey

K Edemacu, X Wu - arXiv preprint arXiv:2404.06001, 2024 - arxiv.org
Pre-trained language models (PLMs) have demonstrated significant proficiency in solving a
wide range of general natural language processing (NLP) tasks. Researchers have …

PrivInfer: Privacy-Preserving Inference for Black-box Large Language Model

M Tong, K Chen, Y Qi, J Zhang, W Zhang… - arXiv preprint arXiv …, 2023 - arxiv.org
Large language models (LLMs), such as ChatGPT, have simplified text generation tasks, yet
their inherent privacy risks are increasingly garnering attention. While differential privacy …

Text Embedding Inversion Attacks on Multilingual Language Models

Y Chen, H Lent, J Bjerva - arXiv preprint arXiv:2401.12192, 2024 - arxiv.org
Representing textual information as real-valued embeddings has become the norm in
NLP. Moreover, with the rise of public interest in large language models (LLMs) …

Differentially Private Multimodal Laplacian Dropout (DP-MLD) for EEG Representative Learning

X Fu, B Wang, X Guo, G Liu, Y Xiang - arXiv preprint arXiv:2409.13440, 2024 - arxiv.org
Recently, multimodal electroencephalogram (EEG) learning has shown great promise in
disease detection. At the same time, ensuring privacy in clinical studies has become …

Just Rewrite It Again: A Post-Processing Method for Enhanced Semantic Similarity and Privacy Preservation of Differentially Private Rewritten Text

S Meisenbacher, F Matthes - … of the 19th International Conference on …, 2024 - dl.acm.org
The study of Differential Privacy (DP) in Natural Language Processing often frames text
privatization as a rewriting task, in which sensitive input texts are rewritten to hide …

Protecting Privacy in Classifiers by Token Manipulation

R Harel, Y Elboher, Y Pinter - arXiv preprint arXiv:2407.01334, 2024 - arxiv.org
Using language models as a remote service entails sending private information to an
untrusted provider. In addition, potential eavesdroppers can intercept the messages, thereby …