Combating misinformation in the age of LLMs: Opportunities and challenges

C Chen, K Shu - AI Magazine, 2023 - Wiley Online Library
Misinformation such as fake news and rumors is a serious threat to information ecosystems
and public trust. The emergence of large language models (LLMs) has great potential to …

Large language models on graphs: A comprehensive survey

B Jin, G Liu, C Han, M Jiang, H Ji, J Han - arXiv preprint arXiv:2312.02783, 2023 - arxiv.org
Large language models (LLMs), such as ChatGPT and LLaMA, are creating significant
advancements in natural language processing, due to their strong text encoding/decoding …

Is your code generated by ChatGPT really correct? Rigorous evaluation of large language models for code generation

J Liu, CS Xia, Y Wang, L Zhang - Advances in Neural …, 2024 - proceedings.neurips.cc
Program synthesis has long been studied, with recent approaches focused on directly using
the power of Large Language Models (LLMs) to generate code. Programming benchmarks …

Mixtral of experts

AQ Jiang, A Sablayrolles, A Roux, A Mensch… - arXiv preprint arXiv …, 2024 - arxiv.org
We introduce Mixtral 8x7B, a Sparse Mixture of Experts (SMoE) language model. Mixtral has
the same architecture as Mistral 7B, with the difference that each layer is composed of 8 …
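
The snippet describes a sparse mixture-of-experts (SMoE) layer in which each Transformer block holds several expert feed-forward networks and a router activates only a few of them per token. Below is a minimal PyTorch sketch of that general idea; the layer sizes, the 8-expert/top-2 routing figures, and the dense expert loop are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a sparse mixture-of-experts (SMoE) feed-forward layer.
# Sizes and routing parameters are hypothetical, chosen for readability.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    def __init__(self, d_model=512, d_ff=2048, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        # One small feed-forward network per expert.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )
        # Router scores each token against every expert.
        self.router = nn.Linear(d_model, n_experts)

    def forward(self, x):                      # x: (batch, seq, d_model)
        scores = self.router(x)                # (batch, seq, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)   # normalize over the selected experts
        out = torch.zeros_like(x)
        # Dense loop over experts for clarity; real systems dispatch tokens sparsely.
        for e, expert in enumerate(self.experts):
            chosen = (idx == e)                # (batch, seq, top_k) bool
            if chosen.any():
                gate = (weights * chosen).sum(-1, keepdim=True)  # per-token weight for expert e
                out = out + gate * expert(x)
        return out

layer = SparseMoE()
tokens = torch.randn(2, 16, 512)
print(layer(tokens).shape)  # torch.Size([2, 16, 512])
```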

ChatGPT for robotics: Design principles and model abilities

SH Vemprala, R Bonatti, A Bucker, A Kapoor - IEEE Access, 2024 - ieeexplore.ieee.org
This paper presents an experimental study regarding the use of OpenAI's ChatGPT for
robotics applications. We outline a strategy that combines design principles for prompt …

Large language models are zero-shot rankers for recommender systems

Y Hou, J Zhang, Z Lin, H Lu, R Xie, J McAuley… - … on Information Retrieval, 2024 - Springer
Recently, large language models (LLMs) (e.g., GPT-4) have demonstrated impressive
general-purpose task-solving abilities, including the potential to approach recommendation tasks …
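
The title points to prompting an instruction-following LLM to rank candidate items zero-shot. A rough sketch of how such a ranking prompt might be assembled and its answer parsed follows; the prompt wording, the letter-based answer format, and the `call_llm` stub are illustrative assumptions, not the paper's protocol.

```python
# Illustrative sketch: build a zero-shot ranking prompt for a recommender
# and parse the model's ranked list. `call_llm` is a hypothetical stub.
from typing import Callable, List

def build_ranking_prompt(history: List[str], candidates: List[str]) -> str:
    lines = ["I've watched the following movies in order:"]
    lines += [f"{i + 1}. {title}" for i, title in enumerate(history)]
    lines.append("Now there are several candidate movies:")
    lines += [f"[{chr(65 + i)}] {title}" for i, title in enumerate(candidates)]
    lines.append("Rank the candidates by how likely I am to watch them next. "
                 "Answer with the bracketed letters only, most likely first.")
    return "\n".join(lines)

def parse_ranking(answer: str, candidates: List[str]) -> List[str]:
    order = []
    for ch in answer:
        i = ord(ch.upper()) - 65           # map 'A', 'B', ... back to candidate indices
        if 0 <= i < len(candidates) and candidates[i] not in order:
            order.append(candidates[i])
    # Fall back to the original order for anything the model omitted.
    return order + [c for c in candidates if c not in order]

def rank(history: List[str], candidates: List[str],
         call_llm: Callable[[str], str]) -> List[str]:
    return parse_ranking(call_llm(build_ranking_prompt(history, candidates)), candidates)

# Usage with a dummy "LLM" that always answers "[B] [A] [C]":
print(rank(["Heat", "Blade Runner"], ["Alien", "Collateral", "Up"], lambda p: "[B] [A] [C]"))
```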

Zephyr: Direct distillation of LM alignment

L Tunstall, E Beeching, N Lambert, N Rajani… - arXiv preprint arXiv …, 2023 - arxiv.org
We aim to produce a smaller language model that is aligned to user intent. Previous
research has shown that applying distilled supervised fine-tuning (dSFT) on larger models …

MetaMath: Bootstrap your own mathematical questions for large language models

L Yu, W Jiang, H Shi, J Yu, Z Liu, Y Zhang… - arXiv preprint arXiv …, 2023 - arxiv.org
Large language models (LLMs) have pushed the limits of natural language understanding
and exhibited excellent problem-solving ability. Despite the great success, most existing …

Large language models: A survey

S Minaee, T Mikolov, N Nikzad, M Chenaghlu… - arXiv preprint arXiv …, 2024 - arxiv.org
Large Language Models (LLMs) have drawn a lot of attention due to their strong
performance on a wide range of natural language tasks, since the release of ChatGPT in …

TrustLLM: Trustworthiness in large language models

L Sun, Y Huang, H Wang, S Wu, Q Zhang… - arXiv preprint arXiv …, 2024 - arxiv.org
Large language models (LLMs), exemplified by ChatGPT, have gained considerable
attention for their excellent natural language processing capabilities. Nonetheless, these …