Compositional semantic parsing with large language models

A Drozdov, N Schärli, E Akyürek, N Scales… - The Eleventh …, 2022 - openreview.net
Humans can reason compositionally when presented with new tasks. Previous research
shows that appropriate prompting techniques enable large language models (LLMs) to …

Compositional exemplars for in-context learning

J Ye, Z Wu, J Feng, T Yu… - … Conference on Machine …, 2023 - proceedings.mlr.press
Large pretrained language models (LMs) have shown impressive In-Context Learning (ICL)
ability, where the model learns to do an unseen task simply by conditioning on a prompt …

LexSym: Compositionality as lexical symmetry

E Akyürek, J Andreas - Proceedings of the 61st Annual Meeting of …, 2023 - aclanthology.org
In tasks like semantic parsing, instruction following, and question answering, standard deep
networks fail to generalize compositionally from small datasets. Many existing approaches …

ReCOGS: How incidental details of a logical form overshadow an evaluation of semantic interpretation

Z Wu, CD Manning, C Potts - Transactions of the Association for …, 2023 - direct.mit.edu
Compositional generalization benchmarks for semantic parsing seek to assess whether
models can accurately compute meanings for novel sentences, but operationalize this in …

Compositionality in computational linguistics

L Donatelli, A Koller - Annual Review of Linguistics, 2023 - annualreviews.org
Neural models greatly outperform grammar-based models across many tasks in modern
computational linguistics. This raises the question of whether linguistic principles, such as …

Break it down: Evidence for structural compositionality in neural networks

M Lepori, T Serre, E Pavlick - Advances in Neural …, 2023 - proceedings.neurips.cc
Though modern neural networks have achieved impressive performance in both vision and
language tasks, we know little about the functions that they implement. One possibility is that …

Uncontrolled lexical exposure leads to overestimation of compositional generalization in pretrained models

N Kim, T Linzen, P Smolensky - arXiv preprint arXiv:2212.10769, 2022 - arxiv.org
Human linguistic capacity is often characterized by compositionality and the generalization it
enables: human learners can produce and comprehend novel complex expressions by …

Philosophy of cognitive science in the age of deep learning

R Millière - Wiley Interdisciplinary Reviews: Cognitive Science, 2024 - Wiley Online Library
Deep learning has enabled major advances across most areas of artificial intelligence
research. This remarkable progress extends beyond mere engineering achievements and …

Language models as models of language

R Millière - arXiv preprint arXiv:2408.07144, 2024 - arxiv.org
This chapter critically examines the potential contributions of modern language models to
theoretical linguistics. Despite their focus on engineering goals, these models' ability to …

Generating Data for Symbolic Language with Large Language Models

J Ye, C Li, L Kong, T Yu - arXiv preprint arXiv:2305.13917, 2023 - arxiv.org
While large language models (LLMs) bring not only performance but also complexity, recent
work has started to turn LLMs into data generators rather than task inferencers, where …