M Christ, S Gunn, O Zamir - The Thirty Seventh Annual …, 2024 - proceedings.mlr.press
Recent advances in the capabilities of large language models such as GPT-4 have spurred increasing concern about our ability to detect AI-generated text. Prior works have suggested …
The recent advancements in large language models (LLMs) have sparked growing apprehension regarding their potential misuse. One approach to mitigating this risk is to …
The remarkable generation performance of large language models has raised ethical and legal concerns about their use, such as plagiarism and copyright infringement. For …
We construct the first provable watermarking scheme for language models with public detectability or verifiability: we use a private key for watermarking and a public key for …
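The asymmetric split this snippet describes, a private key for embedding and a public key for detection, can be illustrated with an ordinary digital-signature pair. This is a toy analogy to show the key asymmetry, not the paper's actual watermarking construction, and it assumes the third-party `cryptography` package is available:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Private key: held only by the model provider, used to mark output.
sk = Ed25519PrivateKey.generate()
# Public key: can be published; anyone can verify a mark with it,
# but no one can forge new marks from it.
pk = sk.public_key()

text = b"model response"
sig = sk.sign(text)

pk.verify(sig, text)  # genuine: returns silently
try:
    pk.verify(sig, b"tampered response")
except InvalidSignature:
    print("verification failed")  # forged or altered text is rejected
```

A real scheme must embed the detectable signal in the generated tokens themselves rather than ship a detached signature, but the trust model is the same: detection requires only public information.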
Watermarking techniques offer a promising way to secure data via embedding covert information into the data. A paramount challenge in the domain lies in preserving the …
O Zamir - arXiv preprint arXiv:2401.10360, 2024 - arxiv.org
We introduce a cryptographic method to hide an arbitrary secret payload in the response of a Large Language Model (LLM). A secret key is required to extract the payload from the …
S Tu, Y Sun, Y Bai, J Yu, L Hou, J Li - arXiv preprint arXiv:2311.07138, 2023 - arxiv.org
To mitigate the potential misuse of large language models (LLMs), recent research has developed watermarking algorithms, which restrict the generation process to leave an …
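The family of algorithms referred to here restricts the generation process so that sampled tokens carry a statistical trace. A minimal sketch of one well-known variant of this idea (a keyed "green-list" bias in the style of Kirchenbauer et al., not necessarily the scheme this particular paper studies; vocabulary size and green fraction below are illustrative):

```python
import hashlib
import random

VOCAB = list(range(1000))   # toy vocabulary of token ids
GREEN_FRACTION = 0.5        # half the vocabulary is "green" at each step

def green_list(prev_token: int, key: bytes) -> set:
    """Pseudo-randomly partition the vocabulary, seeded by the secret
    key and the previous token; generation favours the green half."""
    seed = hashlib.sha256(key + prev_token.to_bytes(4, "big")).digest()
    rng = random.Random(seed)
    shuffled = VOCAB[:]
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(VOCAB) * GREEN_FRACTION)])

def detect(tokens: list, key: bytes) -> float:
    """Fraction of tokens that land in their green list. Unwatermarked
    text scores about GREEN_FRACTION; watermarked text scores higher."""
    hits = sum(
        1 for prev, tok in zip(tokens, tokens[1:])
        if tok in green_list(prev, key)
    )
    return hits / max(1, len(tokens) - 1)

key = b"secret watermark key"
gen = random.Random(0)

# "Hard" watermarked generation: always sample from the green list.
marked = [gen.choice(VOCAB)]
for _ in range(50):
    marked.append(gen.choice(sorted(green_list(marked[-1], key))))

# Unwatermarked text: uniform samples from the whole vocabulary.
plain = [gen.choice(VOCAB) for _ in range(51)]

print(detect(marked, key))  # 1.0: every transition is green
print(detect(plain, key))   # near 0.5: chance level
```

Real schemes soften the bias (a logit bonus instead of a hard restriction) to limit quality loss, and the detector applies a z-test against the chance-level rate rather than a raw fraction.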
The advent of Large Language Models (LLMs) has revolutionized text generation, producing outputs that closely mimic human writing. This blurring of the line between machine- and …