A survey on large language model (LLM) security and privacy: The good, the bad, and the ugly

Y Yao, J Duan, K Xu, Y Cai, Z Sun, Y Zhang - High-Confidence Computing, 2024 - Elsevier
Abstract Large Language Models (LLMs), such as ChatGPT and Bard, have revolutionized
natural language understanding and generation. They possess deep language …

Combating misinformation in the age of LLMs: Opportunities and challenges

C Chen, K Shu - arXiv preprint arXiv:2311.05656, 2023 - arxiv.org
Misinformation such as fake news and rumors is a serious threat to information ecosystems
and public trust. The emergence of Large Language Models (LLMs) has great potential to …

LLM evaluators recognize and favor their own generations

A Panickssery, SR Bowman, S Feng - arXiv preprint arXiv:2404.13076, 2024 - arxiv.org
Self-evaluation using large language models (LLMs) has proven valuable not only in
benchmarking but also in methods like reward modeling, constitutional AI, and self-refinement …

On protecting the data privacy of large language models (LLMs): A survey

B Yan, K Li, M Xu, Y Dong, Y Zhang, Z Ren… - arXiv preprint arXiv …, 2024 - arxiv.org
Large language models (LLMs) are complex artificial intelligence systems capable of
understanding, generating, and translating human language. They learn language patterns …

Watermarks in the sand: Impossibility of strong watermarking for generative models

H Zhang, BL Edelman, D Francati, D Venturi… - arXiv preprint arXiv …, 2023 - arxiv.org
Watermarking generative models consists of planting a statistical signal (watermark) in a
model's output so that it can be later verified that the output was generated by the given …

Detecting multimedia generated by large AI models: A survey

L Lin, N Gupta, Y Zhang, H Ren, CH Liu, F Ding… - arXiv preprint arXiv …, 2024 - arxiv.org
The rapid advancement of Large AI Models (LAIMs), particularly diffusion models and large
language models, has marked a new era where AI-generated multimedia is increasingly …

Can Large Language Models Identify Authorship?

B Huang, C Chen, K Shu - arXiv preprint arXiv:2403.08213, 2024 - arxiv.org
The ability to accurately identify authorship is crucial for verifying content authenticity and
mitigating misinformation. Large Language Models (LLMs) have demonstrated exceptional …

Accuracy pecking order – How 30 AI detectors stack up in detecting generative artificial intelligence content in university English L1 and English L2 student essays

C Chaka - Journal of Applied Learning and Teaching, 2024 - journals.sfu.ca
This study set out to evaluate the accuracy of 30 AI detectors in identifying generative
artificial intelligence (GenAI)-generated and human-written content in university English L1 …

How You Prompt Matters! Even Task-Oriented Constraints in Instructions Affect LLM-Generated Text Detection

R Koike, M Kaneko, N Okazaki - arXiv preprint arXiv:2311.08369, 2023 - arxiv.org
Against the misuse (e.g., plagiarism or spreading misinformation) of Large Language Models
(LLMs), many recent works have presented LLM-generated-text detectors with promising …

Detecting scams using large language models

L Jiang - arXiv preprint arXiv:2402.03147, 2024 - arxiv.org
Large Language Models (LLMs) have gained prominence in various applications, including
security. This paper explores the utility of LLMs in scam detection, a critical aspect of …