From word models to world models: Translating from natural language to the probabilistic language of thought

L Wong, G Grand, AK Lew, ND Goodman… - arXiv preprint arXiv …, 2023 - arxiv.org
How does language inform our downstream thinking? In particular, how do humans make
meaning from language--and how can we leverage a theory of linguistic meaning to build …

Reasoning with language model is planning with world model

S Hao, Y Gu, H Ma, JJ Hong, Z Wang, DZ Wang… - arXiv preprint arXiv …, 2023 - arxiv.org
Large language models (LLMs) have shown remarkable reasoning capabilities, especially
when prompted to generate intermediate reasoning steps (e.g., Chain-of-Thought, CoT) …

Language models, agent models, and world models: The LAW for machine reasoning and planning

Z Hu, T Shu - arXiv preprint arXiv:2312.05230, 2023 - arxiv.org
Despite their tremendous success in many applications, large language models often fall
short of consistent reasoning and planning in various (language, embodied, and social) …

Language models show human-like content effects on reasoning

I Dasgupta, AK Lampinen, SCY Chan… - arXiv preprint arXiv …, 2022 - arxiv.org
Abstract reasoning is a key ability for an intelligent system. Large language models (LMs)
achieve above-chance performance on abstract reasoning tasks, but exhibit many …

Structured, flexible, and robust: benchmarking and improving large language models towards more human-like behavior in out-of-distribution reasoning tasks

KM Collins, C Wong, J Feng, M Wei… - arXiv preprint arXiv …, 2022 - arxiv.org
Human language offers a powerful window into our thoughts--we tell stories, give
explanations, and express our beliefs and goals through words. Abundant evidence also …

Inner monologue: Embodied reasoning through planning with language models

W Huang, F Xia, T Xiao, H Chan, J Liang… - arXiv preprint arXiv …, 2022 - arxiv.org
Recent works have shown how the reasoning capabilities of Large Language Models
(LLMs) can be applied to domains beyond natural language processing, such as planning …

Statler: State-maintaining language models for embodied reasoning

T Yoneda, J Fang, P Li, H Zhang, T Jiang, S Lin… - arXiv preprint arXiv …, 2023 - arxiv.org
Large language models (LLMs) provide a promising tool that enables robots to perform
complex reasoning tasks. However, the limited context window of contemporary LLMs …

Tree of thoughts: Deliberate problem solving with large language models

S Yao, D Yu, J Zhao, I Shafran… - Advances in …, 2024 - proceedings.neurips.cc
Language models are increasingly being deployed for general problem solving
across a wide range of tasks, but are still confined to token-level, left-to-right decision …

Chain of code: Reasoning with a language model-augmented code emulator

C Li, J Liang, A Zeng, X Chen, K Hausman… - arXiv preprint arXiv …, 2023 - arxiv.org
Code provides a general syntactic structure to build complex programs and perform precise
computations when paired with a code interpreter--we hypothesize that language models …

Facts as experts: Adaptable and interpretable neural memory over symbolic knowledge

P Verga, H Sun, LB Soares, WW Cohen - arXiv preprint arXiv:2007.00849, 2020 - arxiv.org
Massive language models are the core of modern NLP modeling and have been shown to
encode impressive amounts of commonsense and factual information. However, that …