Neurosymbolic programming

S Chaudhuri, K Ellis, O Polozov, R Singh… - … and Trends® in …, 2021 - nowpublishers.com
We survey recent work on neurosymbolic programming, an emerging area that bridges the
areas of deep learning and program synthesis. Like in classic machine learning, the goal …

CodeGen: An open large language model for code with multi-turn program synthesis

E Nijkamp, B Pang, H Hayashi, L Tu, H Wang… - arXiv preprint arXiv …, 2022 - arxiv.org
Program synthesis strives to generate a computer program as a solution to a given problem
specification, expressed with input-output examples or natural language descriptions. The …

WebShop: Towards scalable real-world web interaction with grounded language agents

S Yao, H Chen, J Yang… - Advances in Neural …, 2022 - proceedings.neurips.cc
Most existing benchmarks for grounding language in interactive environments either lack
realistic linguistic elements, or prove difficult to scale up due to substantial human …

Compositional exemplars for in-context learning

J Ye, Z Wu, J Feng, T Yu… - … Conference on Machine …, 2023 - proceedings.mlr.press
Large pretrained language models (LMs) have shown impressive In-Context Learning (ICL)
ability, where the model learns to do an unseen task simply by conditioning on a prompt …

Synchromesh: Reliable code generation from pre-trained language models

G Poesia, O Polozov, V Le, A Tiwari, G Soares… - arXiv preprint arXiv …, 2022 - arxiv.org
Large pre-trained language models have been used to generate code, providing a flexible
interface for synthesizing programs from natural language specifications. However, they …

Improving coherence and consistency in neural sequence models with dual-system, neuro-symbolic reasoning

M Nye, M Tessler, J Tenenbaum… - Advances in Neural …, 2021 - proceedings.neurips.cc
Human reasoning can be understood as an interplay between two systems: the intuitive and
associative ("System 1") and the deliberative and logical ("System 2"). Neural sequence …

In-context learning with retrieved demonstrations for language models: A survey

M Luo, X Xu, Y Liu, P Pasupat, M Kazemi - arXiv preprint arXiv …, 2024 - arxiv.org
Language models, especially pre-trained large language models, have showcased
remarkable abilities as few-shot in-context learners (ICL), adept at adapting to new tasks …

Decision-oriented dialogue for human-AI collaboration

J Lin, N Tomlin, J Andreas, J Eisner - Transactions of the Association …, 2024 - direct.mit.edu
We describe a class of tasks called decision-oriented dialogues, in which AI assistants such
as large language models (LMs) must collaborate with one or more humans via natural …

L2CEval: Evaluating Language-to-Code Generation Capabilities of Large Language Models

A Ni, P Yin, Y Zhao, M Riddell, T Feng… - Transactions of the …, 2024 - direct.mit.edu
Recently, large language models (LLMs), especially those that are pretrained on code, have
demonstrated strong capabilities in generating programs from natural language inputs …

Compositionality in computational linguistics

L Donatelli, A Koller - Annual Review of Linguistics, 2023 - annualreviews.org
Neural models greatly outperform grammar-based models across many tasks in modern
computational linguistics. This raises the question of whether linguistic principles, such as …