Combating misinformation in the age of LLMs: Opportunities and challenges

C Chen, K Shu - AI Magazine, 2023 - Wiley Online Library
Misinformation such as fake news and rumors is a serious threat to information ecosystems
and public trust. The emergence of large language models (LLMs) has great potential to …

Survey of vulnerabilities in large language models revealed by adversarial attacks

E Shayegani, MAA Mamun, Y Fu, P Zaree… - arXiv preprint arXiv …, 2023 - arxiv.org
Large Language Models (LLMs) are swiftly advancing in architecture and capability, and as
they integrate more deeply into complex systems, the urgency to scrutinize their security …

Can LLM-generated misinformation be detected?

C Chen, K Shu - arXiv preprint arXiv:2309.13788, 2023 - arxiv.org
The advent of Large Language Models (LLMs) has made a transformative impact. However,
the potential that LLMs such as ChatGPT can be exploited to generate misinformation has …

LMSYS-Chat-1M: A large-scale real-world LLM conversation dataset

L Zheng, WL Chiang, Y Sheng, T Li, S Zhuang… - arXiv preprint arXiv …, 2023 - arxiv.org
Studying how people interact with large language models (LLMs) in real-world scenarios is
increasingly important due to their widespread use in various applications. In this paper, we …

Raising the Bar of AI-generated Image Detection with CLIP

D Cozzolino, G Poggi, R Corvi… - Proceedings of the …, 2024 - openaccess.thecvf.com
The aim of this work is to explore the potential of pre-trained vision-language models (VLMs)
for universal detection of AI-generated images. We develop a lightweight detection strategy …

Rethinking machine unlearning for large language models

S Liu, Y Yao, J Jia, S Casper, N Baracaldo… - arXiv preprint arXiv …, 2024 - arxiv.org
We explore machine unlearning (MU) in the domain of large language models (LLMs),
referred to as LLM unlearning. This initiative aims to eliminate undesirable data influence …

Can large language models provide security & privacy advice? Measuring the ability of LLMs to refute misconceptions

Y Chen, A Arunasalam, ZB Celik - … of the 39th Annual Computer Security …, 2023 - dl.acm.org
Users seek security & privacy (S&P) advice from online resources, including trusted
websites and content-sharing platforms. These resources help users understand S&P …

Removing RLHF protections in GPT-4 via fine-tuning

Q Zhan, R Fang, R Bindu, A Gupta, T Hashimoto… - arXiv preprint arXiv …, 2023 - arxiv.org
As large language models (LLMs) have increased in their capabilities, so has their
potential for dual use. To reduce harmful outputs, producers and vendors of LLMs have used …

Generative Artificial Intelligence for Software Engineering--A Research Agenda

A Nguyen-Duc, B Cabrero-Daniel, A Przybylek… - arXiv preprint arXiv …, 2023 - arxiv.org
Generative Artificial Intelligence (GenAI) tools have become increasingly prevalent in
software development, offering assistance to various managerial and technical project …

Large Language Models for Code Analysis: Do LLMs Really Do Their Job?

C Fang, N Miao, S Srivastav, J Liu, R Zhang… - 33rd USENIX Security …, 2024 - usenix.org
Large language models (LLMs) have demonstrated significant potential in the realm of
natural language understanding and programming code processing tasks. Their capacity to …