Challenges and applications of large language models

J Kaddour, J Harris, M Mozes, H Bradley… - arXiv preprint arXiv …, 2023 - arxiv.org
Large Language Models (LLMs) went from non-existent to ubiquitous in the machine
learning discourse within a few years. Due to the fast pace of the field, it is difficult to identify …

Decoding ChatGPT: a taxonomy of existing research, current challenges, and possible future directions

SS Sohail, F Farhat, Y Himeur, M Nadeem… - Journal of King Saud …, 2023 - Elsevier
Chat Generative Pre-trained Transformer (ChatGPT) has gained significant interest
and attention since its launch in November 2022. It has shown impressive performance in …

Regulating ChatGPT and other large generative AI models

P Hacker, A Engel, M Mauer - Proceedings of the 2023 ACM Conference …, 2023 - dl.acm.org
Large generative AI models (LGAIMs), such as ChatGPT, GPT-4 or Stable Diffusion, are
rapidly transforming the way we communicate, illustrate, and create. However, AI regulation …

Paraphrasing evades detectors of AI-generated text, but retrieval is an effective defense

K Krishna, Y Song, M Karpinska… - Advances in Neural …, 2024 - proceedings.neurips.cc
The rise in malicious usage of large language models, such as fake content creation and
academic plagiarism, has motivated the development of approaches that identify AI …

Can AI-generated text be reliably detected?

VS Sadasivan, A Kumar, S Balasubramanian… - arXiv preprint arXiv …, 2023 - arxiv.org
The unregulated use of LLMs can potentially lead to malicious consequences such as
plagiarism, generating fake news, spamming, etc. Therefore, reliable detection of AI …

Defending ChatGPT against jailbreak attack via self-reminders

Y Xie, J Yi, J Shao, J Curl, L Lyu, Q Chen… - Nature Machine …, 2023 - nature.com
ChatGPT is a societally impactful artificial intelligence tool with millions of users and
integration into products such as Bing. However, the emergence of jailbreak attacks notably …

The science of detecting LLM-generated text

R Tang, YN Chuang, X Hu - Communications of the ACM, 2024 - dl.acm.org
Communications of the ACM, Volume 67, Number 4 (2024), Pages 50-59 …

Identifying and mitigating the security risks of generative AI

C Barrett, B Boyd, E Bursztein, N Carlini… - … and Trends® in …, 2023 - nowpublishers.com
Every major technical invention resurfaces the dual-use dilemma—the new technology has
the potential to be used for good as well as for harm. Generative AI (GenAI) techniques, such …

Undetectable watermarks for language models

M Christ, S Gunn, O Zamir - The Thirty Seventh Annual …, 2024 - proceedings.mlr.press
Recent advances in the capabilities of large language models such as GPT-4 have spurred
increasing concern about our ability to detect AI-generated text. Prior works have suggested …

ChatGPT: More than a “weapon of mass deception”: ethical challenges and responses from the human-centered artificial intelligence (HCAI) perspective

AJG Sison, MT Daza, R Gozalo-Brizuela… - … Journal of Human …, 2024 - Taylor & Francis
This article explores the ethical problems arising from the use of ChatGPT as a kind of
generative AI and suggests responses based on the Human-Centered Artificial Intelligence …