When large language models meet personalization: Perspectives of challenges and opportunities

J Chen, Z Liu, X Huang, C Wu, Q Liu, G Jiang, Y Pu… - World Wide Web, 2024 - Springer
The advent of large language models marks a revolutionary breakthrough in artificial
intelligence. With the unprecedented scale of training and model parameters, the capability …

A review on language models as knowledge bases

B AlKhamissi, M Li, A Celikyilmaz, M Diab… - arXiv preprint arXiv …, 2022 - arxiv.org
Recently, there has been a surge of interest in the NLP community on the use of pretrained
Language Models (LMs) as Knowledge Bases (KBs). Researchers have shown that LMs …

Self-instruct: Aligning language models with self-generated instructions

Y Wang, Y Kordi, S Mishra, A Liu, NA Smith… - arXiv preprint arXiv …, 2022 - arxiv.org
Large "instruction-tuned" language models (i.e., finetuned to respond to instructions) have
demonstrated a remarkable ability to generalize zero-shot to new tasks. Nevertheless, they …

Unifying large language models and knowledge graphs: A roadmap

S Pan, L Luo, Y Wang, C Chen… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Large language models (LLMs), such as ChatGPT and GPT4, are making new waves in the
field of natural language processing and artificial intelligence, due to their emergent ability …

Distilling step-by-step! Outperforming larger language models with less training data and smaller model sizes

CY Hsieh, CL Li, CK Yeh, H Nakhost, Y Fujii… - arXiv preprint arXiv …, 2023 - arxiv.org
Deploying large language models (LLMs) is challenging because they are memory
inefficient and compute-intensive for practical applications. In reaction, researchers train …

A-OKVQA: A benchmark for visual question answering using world knowledge

D Schwenk, A Khandelwal, C Clark, K Marino… - European conference on …, 2022 - Springer
The Visual Question Answering (VQA) task aspires to provide a meaningful testbed
for the development of AI models that can jointly reason over visual and natural language …

Unnatural instructions: Tuning language models with (almost) no human labor

O Honovich, T Scialom, O Levy, T Schick - arXiv preprint arXiv:2212.09689, 2022 - arxiv.org
Instruction tuning enables pretrained language models to perform new tasks from inference-
time natural language descriptions. These approaches rely on vast amounts of human …

Discovering language model behaviors with model-written evaluations

E Perez, S Ringer, K Lukošiūtė, K Nguyen… - arXiv preprint arXiv …, 2022 - arxiv.org
As language models (LMs) scale, they develop many novel behaviors, good and bad,
exacerbating the need to evaluate how they behave. Prior work creates evaluations with …

Synthetic prompting: Generating chain-of-thought demonstrations for large language models

Z Shao, Y Gong, Y Shen, M Huang… - International …, 2023 - proceedings.mlr.press
Large language models can perform various reasoning tasks by using chain-of-thought
prompting, which guides them to find answers through step-by-step demonstrations …

LLMs for knowledge graph construction and reasoning: Recent capabilities and future opportunities

Y Zhu, X Wang, J Chen, S Qiao, Y Ou, Y Yao, S Deng… - World Wide Web, 2024 - Springer
This paper presents an exhaustive quantitative and qualitative evaluation of Large
Language Models (LLMs) for Knowledge Graph (KG) construction and reasoning. We …