ReAct: Synergizing reasoning and acting in language models

S Yao, J Zhao, D Yu, N Du, I Shafran… - arXiv preprint arXiv …, 2022 - arxiv.org
While large language models (LLMs) have demonstrated impressive capabilities across
tasks in language understanding and interactive decision making, their abilities for …

Towards reasoning in large language models: A survey

J Huang, KCC Chang - arXiv preprint arXiv:2212.10403, 2022 - arxiv.org
Reasoning is a fundamental aspect of human intelligence that plays a crucial role in
activities such as problem solving, decision making, and critical thinking. In recent years …

Do as I can, not as I say: Grounding language in robotic affordances

M Ahn, A Brohan, N Brown, Y Chebotar… - arXiv preprint arXiv …, 2022 - arxiv.org
Large language models can encode a wealth of semantic knowledge about the world. Such
knowledge could be extremely useful to robots aiming to act upon high-level, temporally …

Large language models still can't plan (a benchmark for LLMs on planning and reasoning about change)

K Valmeekam, A Olmo, S Sreedharan… - arXiv preprint arXiv …, 2022 - arxiv.org
Recent advances in large language models (LLMs) have transformed the field of natural
language processing (NLP). From GPT-3 to PaLM, the state-of-the-art performance on …

The alignment problem from a deep learning perspective

R Ngo, L Chan, S Mindermann - arXiv preprint arXiv:2209.00626, 2022 - arxiv.org
In coming decades, artificial general intelligence (AGI) may surpass human capabilities at
many critical tasks. We argue that, without substantial effort to prevent it, AGIs could learn to …

Robotic skill acquisition via instruction augmentation with vision-language models

T Xiao, H Chan, P Sermanet, A Wahid… - arXiv preprint arXiv …, 2022 - arxiv.org
In recent years, much progress has been made in learning robotic manipulation policies that
follow natural language instructions. Such methods typically learn from corpora of robot …

PDDL planning with pretrained large language models

T Silver, V Hariprasad, RS Shuttleworth… - … foundation models for …, 2022 - drive.google.com
We study few-shot prompting of pretrained large language models (LLMs) towards solving
PDDL planning problems. We are interested in two questions: (1) To what extent can LLMs …

Retrospectives on the Embodied AI Workshop

M Deitke, D Batra, Y Bisk, T Campari, AX Chang… - arXiv preprint arXiv …, 2022 - arxiv.org
We present a retrospective on the state of Embodied AI research. Our analysis focuses on
13 challenges presented at the Embodied AI Workshop at CVPR. These challenges are …

Large language models are not zero-shot communicators

L Ruis, A Khan, S Biderman, S Hooker… - arXiv preprint arXiv …, 2022 - arxiv.org
Despite widespread use of LLMs as conversational agents, evaluations of performance fail
to capture a crucial aspect of communication: interpreting language in context, incorporating …

Composing ensembles of pre-trained models via iterative consensus

S Li, Y Du, JB Tenenbaum, A Torralba… - arXiv preprint arXiv …, 2022 - arxiv.org
Large pre-trained models exhibit distinct and complementary capabilities dependent on the
data they are trained on. Language models such as GPT-3 are capable of textual reasoning …