Security and privacy challenges of large language models: A survey

BC Das, MH Amini, Y Wu - arXiv preprint arXiv:2402.00888, 2024 - arxiv.org
Large Language Models (LLMs) have demonstrated extraordinary capabilities and contributed to multiple fields, such as text generation and summarization, language translation, and question-answering. LLMs have become popular tools for computerized language-processing tasks, capable of analyzing complicated linguistic patterns and providing contextually relevant and appropriate responses. While offering significant advantages, these models are also vulnerable to security and privacy attacks, such as jailbreaking attacks, data poisoning attacks, and Personally Identifiable Information (PII) leakage attacks. This survey provides a thorough review of the security and privacy challenges of LLMs for both training data and users, along with the application-based risks in various domains, such as transportation, education, and healthcare. We assess the extent of LLM vulnerabilities, investigate emerging security and privacy attacks on LLMs, and review potential defense mechanisms. Additionally, the survey outlines existing research gaps in this domain and highlights future research directions.