CodeRL: Mastering code generation through pretrained models and deep reinforcement learning

H Le, Y Wang, AD Gotmare… - Advances in Neural …, 2022 - proceedings.neurips.cc
Program synthesis or code generation aims to generate a program that satisfies a problem
specification. Recent approaches using large-scale pretrained language models (LMs) have …

Execution-based code generation using deep reinforcement learning

P Shojaee, A Jain, S Tipirneni, CK Reddy - arXiv preprint arXiv …, 2023 - arxiv.org
The utilization of programming language (PL) models, pre-trained on large-scale code
corpora, as a means of automating software engineering processes has demonstrated …

PanGu-Coder: Program synthesis with function-level language modeling

F Christopoulou, G Lampouras, M Gritta… - arXiv preprint arXiv …, 2022 - arxiv.org
We present PanGu-Coder, a pretrained decoder-only language model adopting the PanGu-Alpha architecture for text-to-code generation, i.e., the synthesis of programming language …

Compilable neural code generation with compiler feedback

X Wang, Y Wang, Y Wan, F Mi, Y Li, P Zhou… - arXiv preprint arXiv …, 2022 - arxiv.org
Automatically generating compilable programs with (or without) natural language
descriptions has always been a touchstone problem for computational linguistics and …

Leveraging grammar and reinforcement learning for neural program synthesis

R Bunel, M Hausknecht, J Devlin, R Singh… - arXiv preprint arXiv …, 2018 - arxiv.org
Program synthesis is the task of automatically generating a program consistent with a
specification. Recent years have seen the proposal of a number of neural approaches for …

Is your code generated by ChatGPT really correct? Rigorous evaluation of large language models for code generation

J Liu, CS Xia, Y Wang, L Zhang - Advances in Neural …, 2024 - proceedings.neurips.cc
Program synthesis has been long studied with recent approaches focused on directly using
the power of Large Language Models (LLMs) to generate code. Programming benchmarks …

RLTF: Reinforcement learning from unit test feedback

J Liu, Y Zhu, K Xiao, Q Fu, X Han, W Yang… - arXiv preprint arXiv …, 2023 - arxiv.org
The goal of program synthesis, or code generation, is to generate executable code based on
given descriptions. Recently, there has been an increasing number of studies employing …

Improving code generation by training with natural language feedback

A Chen, J Scheurer, T Korbak, JA Campos… - arXiv preprint arXiv …, 2023 - arxiv.org
The potential for pre-trained large language models (LLMs) to use natural language
feedback at inference time has been an exciting recent development. We build upon this …

Planning with large language models for code generation

S Zhang, Z Chen, Y Shen, M Ding… - arXiv preprint arXiv …, 2023 - arxiv.org
Existing large language model-based code generation pipelines typically use beam search
or sampling algorithms during the decoding process. Although the programs they generate …

Latent execution for neural program synthesis beyond domain-specific languages

X Chen, D Song, Y Tian - Advances in Neural Information …, 2021 - proceedings.neurips.cc
Program synthesis from input-output (IO) examples has been a long-standing challenge.
While recent works demonstrated limited success on domain-specific languages (DSL), it …