A systematic survey of prompt engineering in large language models: Techniques and applications

P Sahoo, AK Singh, S Saha, V Jain, S Mondal… - arXiv preprint arXiv …, 2024 - arxiv.org
Prompt engineering has emerged as an indispensable technique for extending the
capabilities of large language models (LLMs) and vision-language models (VLMs). This …
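
As a concrete illustration of the survey's subject, the sketch below contrasts a bare zero-shot query with an engineered prompt (explicit role, label set, and output format). It is a generic example, not a technique from the paper; call_llm is a stub defined inside the sketch, standing in for any chat/completion API.

    def call_llm(prompt: str) -> str:
        # Hypothetical stand-in for any chat/completion API; echoes metadata
        # so the sketch runs without a real model or network access.
        return f"[model output for a {len(prompt)}-char prompt]"

    text = "The battery life is great, but the screen scratches easily."

    # Zero-shot: the model must infer the label set and format on its own.
    zero_shot = f"What is the sentiment of: {text}"

    # Engineered prompt: explicit role, allowed labels, and output format.
    engineered = (
        "You are a precise sentiment classifier.\n"
        "Allowed labels: positive, negative, mixed.\n"
        f"Text: {text}\n"
        "Answer with exactly one label."
    )

    print(call_llm(zero_shot))
    print(call_llm(engineered))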

Efficient large language models: A survey

Z Wan, X Wang, C Liu, S Alam, Y Zheng, J Liu… - arXiv preprint arXiv …, 2023 - arxiv.org
Large Language Models (LLMs) have demonstrated remarkable capabilities in important
tasks such as natural language understanding and language generation, and thus have the …

Computational argumentation-based chatbots: a survey

F Castagna, N Kökciyan, I Sassoon, S Parsons… - Journal of Artificial …, 2024 - jair.org
Chatbots are conversational software applications designed to interact dialectically with
users for a plethora of different purposes. Surprisingly, these colloquial agents have only …

Recursive introspection: Teaching language model agents how to self-improve

Y Qu, T Zhang, N Garg, A Kumar - arXiv preprint arXiv:2407.18219, 2024 - arxiv.org
A central piece in enabling intelligent agentic behavior in foundation models is to make them
capable of introspecting upon their behavior, reasoning, and correcting their mistakes as …
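
The snippet describes the goal rather than the method (the paper proposes RISE, a fine-tuning approach for improving over turns). For orientation, the sketch below shows the generic inference-time generate-critique-revise loop that "introspection" commonly refers to; call_llm is a stub defined in the sketch, and the loop is illustrative, not the paper's training recipe.

    def call_llm(prompt: str) -> str:
        # Stub standing in for any chat/completion API, so the loop runs as-is.
        return f"[model output for: {prompt[:40]}...]"

    def introspect_and_revise(question: str, max_turns: int = 3) -> str:
        # Generate an initial answer, then repeatedly ask the model to
        # critique and, if needed, revise it.
        answer = call_llm(f"Question: {question}\nAnswer:")
        for _ in range(max_turns):
            critique = call_llm(
                f"Question: {question}\nProposed answer: {answer}\n"
                "List any mistakes in the answer, or reply 'OK'."
            )
            if critique.strip() == "OK":
                break  # the model found nothing to fix
            answer = call_llm(
                f"Question: {question}\nPrevious answer: {answer}\n"
                f"Critique: {critique}\nWrite a corrected answer:"
            )
        return answer

    print(introspect_and_revise("What is 17 * 24?"))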

In-context principle learning from mistakes

T Zhang, A Madaan, L Gao, S Zheng, S Mishra… - arXiv preprint arXiv …, 2024 - arxiv.org
In-context learning (ICL, also known as few-shot prompting) has been the standard method
of adapting LLMs to downstream tasks, by learning from a few input-output examples …
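
Since the snippet defines ICL itself, a minimal sketch of the baseline few-shot prompt may help: a handful of input-output demonstrations concatenated ahead of the test input. The paper's principles learned from mistakes would be prepended to such a prompt as additional instructions; the example pairs below are invented for illustration.

    # Baseline few-shot (in-context learning) prompt: demonstrations first,
    # then the test input with the output slot left open for the model.
    examples = [
        ("great movie, loved it", "positive"),
        ("utter waste of time", "negative"),
    ]

    def few_shot_prompt(pairs, query: str) -> str:
        parts = [f"Input: {x}\nOutput: {y}" for x, y in pairs]
        parts.append(f"Input: {query}\nOutput:")
        return "\n\n".join(parts)

    print(few_shot_prompt(examples, "the plot dragged but the acting was superb"))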

Recursive introspection: Teaching LLM agents how to self-improve

Y Qu, T Zhang, N Garg, A Kumar - ICML 2024 Workshop on …, 2024 - openreview.net
A central piece in enabling intelligent agentic behavior in foundation models is to make them
capable of introspecting upon their behavior, to reason and correct their mistakes. However …

Key-point-driven data synthesis with its enhancement on mathematical reasoning

Y Huang, X Liu, Y Gong, Z Gou, Y Shen… - arXiv preprint arXiv …, 2024 - arxiv.org
Large language models (LLMs) have shown great potential in complex reasoning tasks, yet
their performance is often hampered by the scarcity of high-quality, reasoning-focused …
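
A schematic of what key-point-driven synthesis might look like, under the assumption that each generation prompt is conditioned on one reasoning key point at a time so coverage is controlled rather than left to random sampling; call_llm is a stub defined in the sketch and the key points are invented, not taken from the paper.

    def call_llm(prompt: str) -> str:
        # Stub standing in for any chat/completion API.
        return f"[synthesized problem for: {prompt.splitlines()[0]}]"

    key_points = [
        "ratio and proportion",
        "linear equations in one variable",
        "area of composite figures",
    ]

    synthetic_set = []
    for kp in key_points:
        prompt = (
            f"Key point: {kp}\n"
            "Write one new math word problem that exercises exactly this "
            "key point, followed by a step-by-step solution."
        )
        synthetic_set.append(call_llm(prompt))

    print(f"{len(synthetic_set)} synthetic problems generated")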

Can language models perform robust reasoning in chain-of-thought prompting with noisy rationales?

Z Zhou, R Tao, J Zhu, Y Luo, Z Wang, B Han - arXiv preprint arXiv …, 2024 - arxiv.org
This paper investigates an under-explored challenge in large language models (LLMs):
chain-of-thought prompting with noisy rationales, which include irrelevant or inaccurate …
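
To make the setting concrete, the sketch below builds a chain-of-thought prompt from two exemplars: one with a clean rationale and one with a noisy (irrelevant) step of the kind the paper studies. The problems are invented for illustration; this is not the paper's benchmark or method.

    # Chain-of-thought prompting: exemplars include worked rationales
    # before the final answer. The second exemplar carries an irrelevant
    # step, i.e. a "noisy rationale".
    clean = (
        "Q: A shop sells pens at 3 for $2. How much do 9 pens cost?\n"
        "A: 9 pens is 3 groups of 3 pens. Each group costs $2, so "
        "3 * $2 = $6. The answer is $6."
    )

    noisy = (
        "Q: Tom has 4 boxes of 6 eggs. How many eggs does he have?\n"
        "A: Eggs are rich in protein. "  # irrelevant step: the noise
        "4 boxes times 6 eggs is 4 * 6 = 24. The answer is 24."
    )

    question = (
        "Q: A train covers 60 km in 45 minutes. "
        "What is its speed in km/h?\nA:"
    )

    print("\n\n".join([clean, noisy, question]))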

Just say the name: Online continual learning with category names only via data generation

M Seo, S Cho, M Lee, D Misra, H Choi, SJ Kim… - arXiv preprint arXiv …, 2024 - arxiv.org
Requiring extensive human supervision is often impractical for continual learning due to its
cost, leading to the emergence of 'name-only continual learning' that only provides the name …
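
A schematic of the name-only setup: each task delivers only category names, and training examples are synthesized from them. The paper generates data with generative models; generate_sample below is a stub so the sketch runs without one, and the category names are invented.

    import random

    def generate_sample(category: str) -> str:
        # Stand-in for a generative model conditioned on the class name.
        return f"synthetic example depicting a {category} (seed={random.randint(0, 999)})"

    def build_training_set(category_names, per_class: int = 3):
        return [(generate_sample(n), n) for n in category_names for _ in range(per_class)]

    # Categories arrive incrementally; no human-labeled data is supplied.
    for task_categories in [["zebra", "otter"], ["kayak", "drone"]]:
        data = build_training_set(task_categories)
        print(f"task {task_categories}: {len(data)} synthetic examples")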

Retrieved in-context principles from previous mistakes

H Sun, Y Jiang, B Wang, Y Hou, Y Zhang, P Xie… - arXiv preprint arXiv …, 2024 - arxiv.org
In-context learning (ICL) has been instrumental in adapting Large Language Models (LLMs)
to downstream tasks using correct input-output examples. Recent advances have attempted …
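
A minimal sketch of the retrieval step such a method implies: principles distilled from past mistakes are stored, and those most similar to the current query are prepended to the prompt. Word-overlap similarity stands in for an embedding model, and the store contents are invented for illustration.

    # Toy store mapping each principle to a text description of the
    # mistakes it was distilled from; retrieval ranks by word overlap.
    principle_store = {
        "check unit conversions before the final arithmetic": "convert units speed km hours minutes",
        "re-read the question for what quantity is asked": "question asks quantity find answer",
        "verify percentage bases before comparing": "percentage base compare increase",
    }

    def similarity(a: str, b: str) -> float:
        wa, wb = set(a.lower().split()), set(b.lower().split())
        return len(wa & wb) / max(1, len(wa | wb))

    def retrieve_principles(query: str, k: int = 2):
        ranked = sorted(
            principle_store,
            key=lambda p: similarity(query, principle_store[p]),
            reverse=True,
        )
        return ranked[:k]

    query = "A train covers 60 km in 45 minutes. What is its speed in km/h?"
    prompt = (
        "Principles from past mistakes:\n- "
        + "\n- ".join(retrieve_principles(query))
        + f"\n\nQ: {query}\nA:"
    )
    print(prompt)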