Combating misinformation in the age of LLMs: Opportunities and challenges

C Chen, K Shu - AI Magazine, 2023 - Wiley Online Library
Misinformation such as fake news and rumors is a serious threat to information ecosystems
and public trust. The emergence of large language models (LLMs) has great potential to …

Safeguarding authenticity for mitigating the harms of generative AI: Issues, research agenda, and policies for detection, fact-checking, and ethical AI

AA Hamed, M Zachara-Szymanska, X Wu - iScience, 2024 - cell.com
As the influence of Transformer-based approaches in general and generative AI in particular
continues to expand across various domains, concerns regarding authenticity and …

Detecting multimedia generated by large AI models: A survey

L Lin, N Gupta, Y Zhang, H Ren, CH Liu, F Ding… - arXiv preprint arXiv …, 2024 - arxiv.org
The rapid advancement of Large AI Models (LAIMs), particularly diffusion models and large
language models, has marked a new era where AI-generated multimedia is increasingly …

Mapping the increasing use of LLMs in scientific papers

W Liang, Y Zhang, Z Wu, H Lepp, W Ji, X Zhao… - arXiv preprint arXiv …, 2024 - arxiv.org
Scientific publishing lays the foundation of science by disseminating research findings,
fostering collaboration, encouraging reproducibility, and ensuring that scientific knowledge …

LLM-as-a-Coauthor: The challenges of detecting LLM-human mixcase

C Gao, D Chen, Q Zhang, Y Huang, Y Wan… - arXiv preprint arXiv …, 2024 - arxiv.org
With the remarkable development and widespread applications of large language models
(LLMs), the use of machine-generated text (MGT) is becoming increasingly common. This …

Spotting LLMs with Binoculars: Zero-shot detection of machine-generated text

A Hans, A Schwarzschild, V Cherepanova… - arXiv preprint arXiv …, 2024 - arxiv.org
Detecting text generated by modern large language models is thought to be hard, as both
LLMs and humans can exhibit a wide range of complex behaviors. However, we find that a …

WaterBench: Towards holistic evaluation of watermarks for large language models

S Tu, Y Sun, Y Bai, J Yu, L Hou, J Li - arXiv preprint arXiv:2311.07138, 2023 - arxiv.org
To mitigate the potential misuse of large language models (LLMs), recent research has
developed watermarking algorithms, which restrict the generation process to leave an …

LLM-as-a-Coauthor: Can Mixed Human-Written and Machine-Generated Text Be Detected?

Q Zhang, C Gao, D Chen, Y Huang… - Findings of the …, 2024 - aclanthology.org
With the rapid development and widespread application of Large Language Models (LLMs),
the use of Machine-Generated Text (MGT) has become increasingly common, bringing with …

Counterspeakers' Perspectives: Unveiling Barriers and AI Needs in the Fight against Online Hate

J Mun, C Buerger, JT Liang, J Garland… - Proceedings of the CHI …, 2024 - dl.acm.org
Counterspeech, i.e., direct responses against hate speech, has become an important tool to
address the increasing amount of hate online while avoiding censorship. Although AI has …

Bypassing LLM Watermarks with Color-Aware Substitutions

Q Wu, V Chandrasekaran - arXiv preprint arXiv:2403.14719, 2024 - arxiv.org
Watermarking approaches are proposed to identify if text being circulated is human or large
language model (LLM) generated. The state-of-the-art watermarking strategy of …