X Wan, R Sun, H Nakhost, SO Arik - arXiv preprint arXiv:2406.15708, 2024 - arxiv.org
Large language models have demonstrated remarkable capabilities, but their performance is heavily reliant on effective prompt engineering. Automatic prompt optimization (APO) …
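A minimal sketch of the kind of automatic prompt optimization (APO) loop referred to above: candidate instructions are proposed by the model itself, scored on a small labelled dev set, and the best one is kept. The rewriting-and-scoring strategy is illustrative only, not the cited paper's method, and call_llm is a hypothetical wrapper around any chat-completion API.

```python
# Illustrative APO loop (not the cited paper's algorithm).
# call_llm(prompt) -> str is a hypothetical wrapper around any chat-completion API.
from typing import Callable, List, Tuple

def accuracy(instruction: str,
             dev_set: List[Tuple[str, str]],
             call_llm: Callable[[str], str]) -> float:
    """Fraction of dev examples answered correctly under this instruction."""
    correct = 0
    for question, gold in dev_set:
        answer = call_llm(f"{instruction}\n\nInput: {question}\nAnswer:")
        correct += int(gold.lower() in answer.lower())
    return correct / len(dev_set)

def optimize_prompt(seed_instruction: str,
                    dev_set: List[Tuple[str, str]],
                    call_llm: Callable[[str], str],
                    n_rounds: int = 3,
                    n_candidates: int = 4) -> str:
    """Greedy search: ask the LLM to rewrite the current best instruction,
    then keep whichever candidate scores highest on the dev set."""
    best = seed_instruction
    best_score = accuracy(best, dev_set, call_llm)
    for _ in range(n_rounds):
        for _ in range(n_candidates):
            candidate = call_llm(
                "Rewrite the following task instruction to make it clearer and "
                f"more effective. Return only the instruction.\n\n{best}"
            )
            score = accuracy(candidate, dev_set, call_llm)
            if score > best_score:
                best, best_score = candidate, score
    return best
```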
Solving complex real-world tasks requires cycles of actions and observations. This is particularly true in science, where tasks require many cycles of analysis, tool use, and …
X Xu, Z Wu, R Qiao, A Verma, Y Shu… - Findings of the …, 2024 - aclanthology.org
This position paper proposes a data-centric viewpoint of AI research, focusing on large language models (LLMs). We start by making a key observation that data is instrumental in …
Optimal prompt selection is crucial for maximizing large language model (LLM) performance on downstream tasks. As the most powerful models are proprietary and can only be invoked …
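A minimal sketch of prompt selection when the model is a proprietary black box that can only be queried through an API under a limited budget, here via successive halving over a fixed candidate pool. The budget-allocation scheme and the call_llm/score_on helpers are illustrative assumptions, not the cited paper's algorithm.

```python
# Illustrative budget-aware prompt selection via successive halving
# (not the cited paper's method). call_llm(prompt) -> str is hypothetical.
import random
from typing import Callable, Dict, List, Tuple

def score_on(prompt: str,
             batch: List[Tuple[str, str]],
             call_llm: Callable[[str], str]) -> float:
    """Average correctness of a prompt on a batch of (input, gold) pairs."""
    hits = sum(
        gold.lower() in call_llm(f"{prompt}\n\nInput: {x}\nAnswer:").lower()
        for x, gold in batch
    )
    return hits / len(batch)

def select_prompt(candidates: List[str],
                  dev_set: List[Tuple[str, str]],
                  call_llm: Callable[[str], str],
                  batch_size: int = 8) -> str:
    """Successive halving: score all survivors on a fresh batch each round,
    keep the top half, until a single prompt remains."""
    survivors = list(candidates)
    while len(survivors) > 1:
        batch = random.sample(dev_set, min(batch_size, len(dev_set)))
        scores: Dict[str, float] = {p: score_on(p, batch, call_llm) for p in survivors}
        survivors.sort(key=lambda p: scores[p], reverse=True)
        survivors = survivors[: max(1, len(survivors) // 2)]
    return survivors[0]
```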
Learning with limited labelled data, such as prompting, in-context learning, fine-tuning, meta-learning or few-shot learning, aims to effectively train a model using only a small amount of …
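A minimal sketch of few-shot in-context learning, one of the limited-label settings listed above: a handful of labelled exemplars are placed directly in the prompt instead of being used to update model weights. call_llm is again a hypothetical wrapper around any chat-completion API.

```python
# Illustrative few-shot in-context learning: no parameters are updated;
# the labelled exemplars live inside the prompt. call_llm is hypothetical.
from typing import Callable, List, Tuple

def few_shot_prompt(instruction: str,
                    exemplars: List[Tuple[str, str]],
                    query: str) -> str:
    """Format k labelled exemplars followed by the unlabelled query."""
    shots = "\n\n".join(f"Input: {x}\nAnswer: {y}" for x, y in exemplars)
    return f"{instruction}\n\n{shots}\n\nInput: {query}\nAnswer:"

def predict(instruction: str,
            exemplars: List[Tuple[str, str]],
            query: str,
            call_llm: Callable[[str], str]) -> str:
    """Query the model with the in-context examples in place."""
    return call_llm(few_shot_prompt(instruction, exemplars, query))
```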