A survey on deep learning for software engineering

Y Yang, X Xia, D Lo, J Grundy - ACM Computing Surveys (CSUR), 2022 - dl.acm.org
In 2006, Geoffrey Hinton proposed the concept of training “Deep Neural Networks (DNNs)”
and an improved model training method to break the bottleneck of neural network …

Abstraction and analogy-making in artificial intelligence

M Mitchell - Annals of the New York Academy of Sciences, 2021 - Wiley Online Library
Conceptual abstraction and analogy-making are key abilities underlying humans' abilities to
learn, reason, and robustly adapt their knowledge to new domains. Despite a long history of …

Voyager: An open-ended embodied agent with large language models

G Wang, Y Xie, Y Jiang, A Mandlekar, C Xiao… - arXiv preprint arXiv …, 2023 - arxiv.org
We introduce Voyager, the first LLM-powered embodied lifelong learning agent in Minecraft
that continuously explores the world, acquires diverse skills, and makes novel discoveries …

MetaGPT: Meta programming for a multi-agent collaborative framework

S Hong, X Zheng, J Chen, Y Cheng, J Wang… - arXiv preprint arXiv …, 2023 - arxiv.org
Recently, remarkable progress has been made in automated task-solving through the use of
multi-agent systems driven by large language models (LLMs). However, existing LLM-based multi …

Faster sorting algorithms discovered using deep reinforcement learning

DJ Mankowitz, A Michi, A Zhernov, M Gelmi, M Selvi… - Nature, 2023 - nature.com
Fundamental algorithms such as sorting or hashing are used trillions of times on any given
day. As demand for computation grows, it has become critical for these algorithms to be as …

Beyond the imitation game: Quantifying and extrapolating the capabilities of language models

A Srivastava, A Rastogi, A Rao, AAM Shoeb… - arXiv preprint arXiv …, 2022 - arxiv.org
Language models demonstrate both quantitative improvement and new qualitative
capabilities with increasing scale. Despite their potentially transformative impact, these new …

LEVER: Learning to verify language-to-code generation with execution

A Ni, S Iyer, D Radev, V Stoyanov… - International …, 2023 - proceedings.mlr.press
The advent of large language models trained on code (code LLMs) has led to significant
progress in language-to-code generation. State-of-the-art approaches in this area combine …

Program synthesis with large language models

J Austin, A Odena, M Nye, M Bosma… - arXiv preprint arXiv …, 2021 - arxiv.org
This paper explores the limits of the current generation of large language models for
program synthesis in general purpose programming languages. We evaluate a collection of …

CodeT: Code generation with generated tests

B Chen, F Zhang, A Nguyen, D Zan, Z Lin… - arXiv preprint arXiv …, 2022 - arxiv.org
The task of generating code solutions for a given programming problem can benefit from the
use of pre-trained language models such as Codex, which can produce multiple diverse …

InterCode: Standardizing and benchmarking interactive coding with execution feedback

J Yang, A Prabhakar… - Advances in Neural …, 2024 - proceedings.neurips.cc
Humans write code in a fundamentally interactive manner and rely on constant execution
feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have …