Program synthesis strives to generate a computer program as a solution to a given problem specification, expressed with input-output examples or natural language descriptions. The …
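Specification by input-output examples can be made concrete with a toy enumerative synthesizer. The following is a minimal sketch, not drawn from the cited work: the candidate pool, the `synthesize` helper, and the tiny space of unary integer functions are all hypothetical, and real synthesizers search far richer program spaces.

```python
# Minimal sketch of specification-by-examples (hypothetical names throughout):
# enumerate candidate programs and return one consistent with every example.

from typing import Callable, Iterable

def synthesize(examples: list[tuple[int, int]],
               candidates: Iterable[Callable[[int], int]]) -> Callable[[int], int] | None:
    """Return the first candidate program that satisfies all input-output pairs."""
    for program in candidates:
        if all(program(x) == y for x, y in examples):
            return program
    return None

# A toy, enumerable space of candidate programs.
CANDIDATES = [
    lambda x: x + 1,
    lambda x: x * 2,
    lambda x: x * x,
]

# The problem specification is given purely as input-output examples.
spec = [(2, 4), (3, 9)]
found = synthesize(spec, CANDIDATES)
assert found is not None and found(5) == 25  # x * x matches the spec
```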
S Yao, H Chen, J Yang… - Advances in Neural Information Processing Systems, 2022 - proceedings.neurips.cc
Most existing benchmarks for grounding language in interactive environments either lack realistic linguistic elements or prove difficult to scale up due to substantial human …
J Ye, Z Wu, J Feng, T Yu… - International Conference on Machine Learning, 2023 - proceedings.mlr.press
Large pretrained language models (LMs) have shown impressive In-Context Learning (ICL) ability, where the model learns to do an unseen task simply by conditioning on a prompt …
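The conditioning-on-a-prompt mechanism is easy to make concrete. Below is a minimal sketch, not taken from the cited paper: `build_icl_prompt` is a hypothetical helper, and `complete` stands in for any text-completion API. No weights are updated; the demonstrations placed in the prompt are the only task signal.

```python
# Minimal sketch of in-context learning via prompting. The model "learns"
# the unseen task only by conditioning on labeled exemplars in the prompt.

def build_icl_prompt(demonstrations: list[tuple[str, str]], query: str) -> str:
    """Concatenate labeled exemplars, then the unlabeled query."""
    parts = [f"Input: {x}\nOutput: {y}" for x, y in demonstrations]
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)

demos = [
    ("The movie was wonderful.", "positive"),
    ("I wasted two hours of my life.", "negative"),
]
prompt = build_icl_prompt(demos, "A delightful surprise from start to finish.")
# prediction = complete(prompt)  # hypothetical LM call; the model infers the
#                                # task from the exemplars alone
```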
Large pre-trained language models have been used to generate code, providing a flexible interface for synthesizing programs from natural language specifications. However, they …
Human reasoning can be understood as an interplay between two systems: the intuitive and associative ("System 1") and the deliberative and logical ("System 2"). Neural sequence …
Language models, especially pre-trained large language models, have showcased remarkable few-shot in-context learning (ICL) abilities, adeptly adapting to new tasks …
We describe a class of tasks called decision-oriented dialogues, in which AI assistants such as large language models (LMs) must collaborate with one or more humans via natural …
Recently, large language models (LLMs), especially those that are pretrained on code, have demonstrated strong capabilities in generating programs from natural language inputs …
L Donatelli, A Koller - Annual Review of Linguistics, 2023 - annualreviews.org
Neural models greatly outperform grammar-based models across many tasks in modern computational linguistics. This raises the question of whether linguistic principles, such as …