Pal: Program-aided language models

L Gao, A Madaan, S Zhou, U Alon… - International …, 2023 - proceedings.mlr.press
Large language models (LLMs) have demonstrated an impressive ability to perform
arithmetic and symbolic reasoning tasks, when provided with a few examples at test time (" …

Graph of thoughts: Solving elaborate problems with large language models

M Besta, N Blach, A Kubicek, R Gerstenberger… - Proceedings of the …, 2024 - ojs.aaai.org
We introduce Graph of Thoughts (GoT): a framework that advances prompting
capabilities in large language models (LLMs) beyond those offered by paradigms such as …

Language models can solve computer tasks

G Kim, P Baldi, S McAleer - Advances in Neural Information …, 2024 - proceedings.neurips.cc
Agents capable of carrying out general tasks on a computer can improve efficiency and
productivity by automating repetitive tasks and assisting in complex problem-solving. Ideally …

Towards reasoning in large language models: A survey

J Huang, KCC Chang - arXiv preprint arXiv:2212.10403, 2022 - arxiv.org
Reasoning is a fundamental aspect of human intelligence that plays a crucial role in
activities such as problem solving, decision making, and critical thinking. In recent years …

Large language models can be easily distracted by irrelevant context

F Shi, X Chen, K Misra, N Scales… - International …, 2023 - proceedings.mlr.press
Large language models have achieved impressive performance on various natural
language processing tasks. However, so far they have been evaluated primarily on …

Decomposed prompting: A modular approach for solving complex tasks

T Khot, H Trivedi, M Finlayson, Y Fu… - arXiv preprint arXiv …, 2022 - arxiv.org
Few-shot prompting is a surprisingly powerful way to use Large Language Models (LLMs) to
solve various tasks. However, this approach struggles as the task complexity increases or …

Reasoning with language model prompting: A survey

S Qiao, Y Ou, N Zhang, X Chen, Y Yao, S Deng… - arXiv preprint arXiv …, 2022 - arxiv.org
Reasoning, as an essential ability for complex problem-solving, can provide back-end
support for various real-world applications, such as medical diagnosis, negotiation, etc. This …

Demonstrate-search-predict: Composing retrieval and language models for knowledge-intensive NLP

O Khattab, K Santhanam, XL Li, D Hall, P Liang… - arXiv preprint arXiv …, 2022 - arxiv.org
Retrieval-augmented in-context learning has emerged as a powerful approach for
addressing knowledge-intensive tasks using frozen language models (LM) and retrieval …

Cognitive architectures for language agents

TR Sumers, S Yao, K Narasimhan… - arXiv preprint arXiv …, 2023 - arxiv.org
Recent efforts have incorporated large language models (LLMs) with external resources (e.g.,
the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or …

Prompting is programming: A query language for large language models

L Beurer-Kellner, M Fischer, M Vechev - Proceedings of the ACM on …, 2023 - dl.acm.org
Large language models have demonstrated outstanding performance on a wide range of
tasks such as question answering and code generation. On a high level, given an input, a …