M Christ, S Gunn, O Zamir - The Thirty Seventh Annual …, 2024 - proceedings.mlr.press
Recent advances in the capabilities of large language models such as GPT-4 have spurred increasing concern about our ability to detect AI-generated text. Prior works have suggested …
Recent advances in large language models (LLMs) have sparked growing apprehension regarding their potential misuse. One approach to mitigating this risk is to …
We present REMARK-LLM, a novel, efficient, and robust watermarking framework designed for texts generated by large language models (LLMs). Synthesizing human-like content …
We construct the first provable watermarking scheme for language models with public detectability or verifiability: we use a private key for watermarking and a public key for …
KY Yoo, W Ahn, N Kwak - arXiv preprint arXiv:2308.00221, 2023 - arxiv.org
This study aims to proactively tackle misuse of large language models beyond identification of machine-generated text. While existing methods focus on detection, some malicious …
Watermarking techniques offer a promising way to secure data by embedding covert information into it. A paramount challenge in the domain lies in preserving the …
As the use of Large Language Models (LLMs) in text generation tasks proliferates, concerns arise over their potential to compromise academic integrity. The education sector currently …
Z Guo, S Yu - arXiv preprint arXiv:2311.07700, 2023 - arxiv.org
Large language models (LLMs) have opened up enormous opportunities while simultaneously posing ethical dilemmas. One of the major concerns is their ability to create …
KY Yoo, W Ahn, N Kwak - Proceedings of the 2024 Conference of …, 2024 - aclanthology.org
We show the viability of tackling misuse of large language models beyond the identification of machine-generated text. While existing zero-bit watermark methods focus on detection …