Challenges and applications of large language models

J Kaddour, J Harris, M Mozes, H Bradley… - arXiv preprint arXiv …, 2023 - arxiv.org
Large Language Models (LLMs) went from non-existent to ubiquitous in the machine
learning discourse within a few years. Due to the fast pace of the field, it is difficult to identify …

A survey on large language model (LLM) security and privacy: The good, the bad, and the ugly

Y Yao, J Duan, K Xu, Y Cai, Z Sun, Y Zhang - High-Confidence Computing, 2024 - Elsevier
Large Language Models (LLMs), such as ChatGPT and Bard, have revolutionized
natural language understanding and generation. They possess deep language …

Paraphrasing evades detectors of AI-generated text, but retrieval is an effective defense

K Krishna, Y Song, M Karpinska… - Advances in Neural …, 2024 - proceedings.neurips.cc
The rise in malicious usage of large language models, such as fake content creation and
academic plagiarism, has motivated the development of approaches that identify AI …

Can AI-generated text be reliably detected?

VS Sadasivan, A Kumar, S Balasubramanian… - arXiv preprint arXiv …, 2023 - arxiv.org
The unregulated use of LLMs can potentially lead to malicious consequences such as
plagiarism, generating fake news, spamming, etc. Therefore, reliable detection of AI …

Trustworthy LLMs: a survey and guideline for evaluating large language models' alignment

Y Liu, Y Yao, JF Ton, X Zhang, R Guo, H Cheng… - arXiv preprint arXiv …, 2023 - arxiv.org
Ensuring alignment, which refers to making models behave in accordance with human
intentions [1, 2], has become a critical task before deploying large language models (LLMs) …

TrustLLM: Trustworthiness in large language models

Y Huang, L Sun, H Wang, S Wu, Q Zhang, Y Li… - arXiv preprint arXiv …, 2024 - arxiv.org
Large language models (LLMs), exemplified by ChatGPT, have gained considerable
attention for their excellent natural language processing capabilities. Nonetheless, these …

Identifying and mitigating the security risks of generative AI

C Barrett, B Boyd, E Bursztein, N Carlini… - … and Trends® in …, 2023 - nowpublishers.com
Every major technical invention resurfaces the dual-use dilemma—the new technology has
the potential to be used for good as well as for harm. Generative AI (GenAI) techniques, such …

Self-consuming generative models go MAD

S Alemohammad, J Casco-Rodriguez, L Luzi… - arXiv preprint arXiv …, 2023 - arxiv.org
Seismic advances in generative AI algorithms for imagery, text, and other data types have led
to the temptation to use synthetic data to train next-generation models. Repeating this …

A survey of text watermarking in the era of large language models

A Liu, L Pan, Y Lu, J Li, X Hu, X Zhang, L Wen… - ACM Computing …, 2024 - dl.acm.org
Text watermarking algorithms are crucial for protecting the copyright of textual content.
Historically, their capabilities and application scenarios were limited. However, recent …

Unbiased watermark for large language models

Z Hu, L Chen, X Wu, Y Wu, H Zhang… - arXiv preprint arXiv …, 2023 - arxiv.org
The recent advancements in large language models (LLMs) have sparked a growing
apprehension regarding the potential misuse. One approach to mitigating this risk is to …