Discovering causal relations and equations from data

G Camps-Valls, A Gerhardus, U Ninad, G Varando… - Physics Reports, 2023 - Elsevier
Physics is a field of science that has traditionally used the scientific method to answer
questions about why natural phenomena occur and to make testable models that explain the …

The relational bottleneck as an inductive bias for efficient abstraction

TW Webb, SM Frankland, A Altabaa, S Segert… - Trends in Cognitive …, 2024 - cell.com
A central challenge for cognitive science is to explain how abstract concepts are acquired
from limited experience. This has often been framed in terms of a dichotomy between …

Large language models are human-level prompt engineers

Y Zhou, AI Muresanu, Z Han, K Paster, S Pitis… - arXiv preprint arXiv …, 2022 - arxiv.org
By conditioning on natural language instructions, large language models (LLMs) have
displayed impressive capabilities as general-purpose computers. However, task …

On the opportunities and risks of foundation models

R Bommasani, DA Hudson, E Adeli, R Altman… - arXiv preprint arXiv …, 2021 - arxiv.org
AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are
trained on broad data at scale and are adaptable to a wide range of downstream tasks. We …

Large language models as general pattern machines

S Mirchandani, F Xia, P Florence, B Ichter… - arXiv preprint arXiv …, 2023 - arxiv.org
We observe that pre-trained large language models (LLMs) are capable of autoregressively
completing complex token sequences--from arbitrary ones procedurally generated by …

Cognitive architectures for language agents

TR Sumers, S Yao, K Narasimhan… - arXiv preprint arXiv …, 2023 - arxiv.org
Recent efforts have incorporated large language models (LLMs) with external resources (e.g.,
the Internet) or internal control flows (e.g., prompt chaining) for tasks requiring grounding or …

Combining data and theory for derivable scientific discovery with AI-Descartes

C Cornelio, S Dash, V Austel, TR Josephson… - Nature …, 2023 - nature.com
Scientists aim to discover meaningful formulae that accurately describe experimental data.
Mathematical models of natural phenomena can be manually created from domain …

Planning with large language models for code generation

S Zhang, Z Chen, Y Shen, M Ding… - arXiv preprint arXiv …, 2023 - arxiv.org
Existing large language model-based code generation pipelines typically use beam search
or sampling algorithms during the decoding process. Although the programs they generate …

DreamCoder: growing generalizable, interpretable knowledge with wake–sleep Bayesian program learning

K Ellis, L Wong, M Nye… - … of the Royal …, 2023 - royalsocietypublishing.org
Expert problem-solving is driven by powerful languages for thinking about problems and
their solutions. Acquiring expertise means learning these languages—systems of concepts …

Demystifying GPT self-repair for code generation

TX Olausson, JP Inala, C Wang, J Gao… - CoRR, 2023 - openreview.net
Large language models have shown remarkable aptitude in code generation, but still
struggle to perform complex tasks. Self-repair--in which the model debugs and repairs its …