From word models to world models: Translating from natural language to the probabilistic language of thought

L Wong, G Grand, AK Lew, ND Goodman… - arXiv preprint arXiv …, 2023 - arxiv.org
How does language inform our downstream thinking? In particular, how do humans make
meaning from language--and how can we leverage a theory of linguistic meaning to build …

Amortizing intractable inference in large language models

EJ Hu, M Jain, E Elmoznino, Y Kaddar, G Lajoie… - arXiv preprint arXiv …, 2023 - arxiv.org
Autoregressive large language models (LLMs) compress knowledge from their training data
through next-token conditional distributions. This limits tractable querying of this knowledge …

Efficient guided generation for large language models

BT Willard, R Louf - arXiv preprint arXiv:2307.09702, 2023 - storage.prod.researchhub.com
In this article we show how the problem of neural text generation can be constructively
reformulated in terms of transitions between the states of a finite-state machine. This …
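
As a concrete illustration of the finite-state-machine framing described in the snippet above, here is a minimal toy sketch (my own example, not Willard and Louf's implementation): at each decoding step, only vocabulary tokens whose characters keep a hand-written DFA for the pattern `[0-9]+` in a live state are allowed.

```python
# Toy sketch of FSM-guided decoding (not the paper's actual algorithm):
# a token is permitted only if consuming its characters keeps the
# partially generated string inside the regular language [0-9]+.

VOCAB = ["12", "7", "ab", "3x", "99"]

def dfa_step(state, ch):
    """Hand-written DFA for [0-9]+: state 0 = start, 1 = accepting."""
    if ch.isdigit():
        return 1
    return None  # dead state: the string can no longer match

def token_ok(state, token):
    """A token is allowed if every character keeps the DFA alive."""
    for ch in token:
        state = dfa_step(state, ch)
        if state is None:
            return False
    return True

def allowed_tokens(state):
    """Tokens that may be emitted from this DFA state."""
    return [t for t in VOCAB if token_ok(state, t)]

print(allowed_tokens(0))  # only the all-digit tokens survive
```

In a real guided decoder, `allowed_tokens` would be precomputed per DFA state and used to mask the model's logits over the full vocabulary rather than returning a Python list.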

Derivative-free guidance in continuous and discrete diffusion models with soft value-based decoding

X Li, Y Zhao, C Wang, G Scalia, G Eraslan… - arXiv preprint arXiv …, 2024 - arxiv.org
Diffusion models excel at capturing the natural design spaces of images, molecules, DNA,
RNA, and protein sequences. However, rather than merely generating designs that are …

GAVEL: Generating games via evolution and language models

G Todd, A Padula, M Stephenson, É Piette… - arXiv preprint arXiv …, 2024 - arxiv.org
Automatically generating novel and interesting games is a complex task. Challenges include
representing game rules in a computationally workable form, searching through the large …

Pragmatic Instruction Following and Goal Assistance via Cooperative Language-Guided Inverse Planning

T Zhi-Xuan, L Ying, V Mansinghka… - arXiv preprint arXiv …, 2024 - arxiv.org
People often give instructions whose meaning is ambiguous without further context,
expecting that their actions or goals will disambiguate their intentions. How can we build …

ComPile: A large IR dataset from production sources

A Grossman, L Paehler, K Parasyris, T Ben-Nun… - arXiv preprint arXiv …, 2023 - arxiv.org
Code is increasingly becoming a core data modality of modern machine learning research
impacting not only the way we write code with conversational agents like OpenAI's ChatGPT …

Doing experiments and revising rules with natural language and probabilistic reasoning

WT Piriyakulkij, C Langenfeld, TA Le, K Ellis - arXiv preprint arXiv …, 2024 - arxiv.org
We give a model of how to infer natural language rules by doing experiments. The model
integrates Large Language Models (LLMs) with Monte Carlo algorithms for probabilistic …

Reward-Guided Controlled Generation for Inference-Time Alignment in Diffusion Models: Tutorial and Review

M Uehara, Y Zhao, C Wang, X Li, A Regev… - arXiv preprint arXiv …, 2025 - arxiv.org
This tutorial provides an in-depth guide on inference-time guidance and alignment methods
for optimizing downstream reward functions in diffusion models. While diffusion models are …