A comprehensive overview of large language models

H Naveed, AU Khan, S Qiu, M Saqib, S Anwar… - arXiv preprint arXiv …, 2023 - arxiv.org
Large Language Models (LLMs) have recently demonstrated remarkable capabilities in
natural language processing tasks and beyond. This success of LLMs has led to a large …

Crosslingual generalization through multitask finetuning

N Muennighoff, T Wang, L Sutawika, A Roberts… - arXiv preprint arXiv …, 2022 - arxiv.org
Multitask prompted finetuning (MTF) has been shown to help large language models
generalize to new tasks in a zero-shot setting, but so far explorations of MTF have focused …

Datasets for large language models: A comprehensive survey

Y Liu, J Cao, C Liu, K Ding, L Jin - arXiv preprint arXiv:2402.18041, 2024 - arxiv.org
This paper embarks on an exploration of Large Language Model (LLM) datasets,
which play a crucial role in the remarkable advancements of LLMs. The datasets serve as …

VisIT-Bench: A benchmark for vision-language instruction following inspired by real-world use

Y Bitton, H Bansal, J Hessel, R Shao, W Zhu… - arXiv preprint arXiv …, 2023 - arxiv.org
We introduce VisIT-Bench (Visual InsTruction Benchmark), a benchmark for evaluating
instruction-following vision-language models in real-world use. Our starting point is curating …

The shifted and the overlooked: A task-oriented investigation of user-GPT interactions

S Ouyang, S Wang, Y Liu, M Zhong, Y Jiao… - arXiv preprint arXiv …, 2023 - arxiv.org
Recent progress in Large Language Models (LLMs) has produced models that exhibit
remarkable performance across a variety of NLP tasks. However, it remains unclear whether …

Active instruction tuning: Improving cross-task generalization by training on prompt sensitive tasks

PN Kung, F Yin, D Wu, KW Chang, N Peng - arXiv preprint arXiv …, 2023 - arxiv.org
Instruction tuning (IT) achieves impressive zero-shot generalization results by training large
language models (LLMs) on a massive number of diverse tasks with instructions. However …

MUFFIN: Curating multi-faceted instructions for improving instruction following

R Lou, K Zhang, J Xie, Y Sun, J Ahn, H Xu… - The Twelfth …, 2023 - openreview.net
In the realm of large language models (LLMs), enhancing instruction-following capability
often involves curating expansive training data. This is achieved through two primary …

Data management for large language models: A survey

Z Wang, W Zhong, Y Wang, Q Zhu, F Mi… - arXiv preprint arXiv …, 2023 - arxiv.org
Data plays a fundamental role in the training of Large Language Models (LLMs). Effective
data management, particularly in the formulation of a well-suited training dataset, holds …

Explore-Instruct: Enhancing domain-specific instruction coverage through active exploration

F Wan, X Huang, T Yang, X Quan, W Bi… - arXiv preprint arXiv …, 2023 - arxiv.org
Instruction-tuning can be substantially optimized through enhanced diversity, resulting in
models capable of handling a broader spectrum of tasks. However, existing data employed …

InsCL: A data-efficient continual learning paradigm for fine-tuning large language models with instructions

Y Wang, Y Liu, C Shi, H Li, C Chen, H Lu… - arXiv preprint arXiv …, 2024 - arxiv.org
Instruction tuning effectively optimizes Large Language Models (LLMs) for downstream
tasks. Given the changing environments of real-life applications, LLMs require continual …