AirGapAgent: Protecting Privacy-Conscious Conversational Agents

E Bagdasarian, R Yi, S Ghalebikesabi… - Proceedings of the …, 2024 - dl.acm.org
The growing use of large language model (LLM)-based conversational agents to manage sensitive user data raises significant privacy concerns. While these agents excel at …

CASE-Bench: Context-Aware Safety Evaluation Benchmark for Large Language Models

G Sun, X Zhan, S Feng, PC Woodland… - arXiv preprint arXiv …, 2025 - arxiv.org
Aligning large language models (LLMs) with human values is essential for their safe deployment and widespread adoption. Current LLM safety benchmarks often focus solely on …

Permissive Information-Flow Analysis for Large Language Models

SA Siddiqui, R Gaonkar, B Köpf, D Krueger… - arXiv preprint arXiv …, 2024 - arxiv.org
Large Language Models (LLMs) are rapidly becoming commodity components of larger software systems. This poses natural security and privacy problems: poisoned data retrieved …

AI Delegates with a Dual Focus: Ensuring Privacy and Strategic Self-Disclosure

X Chen, Z Zhang, F Yang, X Qin, C Du, X Cheng… - arXiv preprint arXiv …, 2024 - arxiv.org
Large language model (LLM)-based AI delegates are increasingly utilized to act on behalf of users, assisting them with a wide range of tasks through conversational interfaces. Despite …

Protecting Users From Themselves: Safeguarding Contextual Privacy in Interactions with Conversational Agents

IC Ngong, S Kadhe, H Wang, K Murugesan… - Workshop on Socially …, 2024 - openreview.net
Conversational agents are increasingly woven into individuals' personal lives, yet users often underestimate the privacy risks involved. In this paper, based on the principles of …