The task of repository-level code completion is to continue writing unfinished code based on the broader context of the repository. While for automated code completion tools, it is …
With direct access to human-written references as memory, retrieval-augmented generation has achieved much progress in a wide range of text generation tasks. Since better memory …
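The snippet above describes retrieval-augmented generation: retrieve the references most similar to the input, then condition generation on them. A minimal sketch of that retrieve-then-prompt loop, assuming a toy in-memory corpus and bag-of-words cosine similarity (a real system would use a learned retriever over a large reference store; all names here are illustrative):

```python
from collections import Counter
from math import sqrt

# Hypothetical "memory" of human-written references (illustrative only).
MEMORY = [
    "The quick sort algorithm partitions the list around a pivot.",
    "Merge sort divides the list in half and merges sorted runs.",
    "Retrieval-augmented generation conditions a model on retrieved text.",
]

def bow_cosine(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two strings."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = sqrt(sum(v * v for v in ca.values()))
    nb = sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k memory entries most similar to the query."""
    return sorted(MEMORY, key=lambda doc: bow_cosine(query, doc), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Prepend the retrieved memory to the query before generation."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
```

The generation step itself (calling an LLM on `build_prompt`'s output) is omitted; the point is only how retrieved memory is spliced into the prompt.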
ABSTRACT Large Language Models (LLMs) are a new class of computation engines, "programmed" via prompt engineering. Researchers are still learning how to best "program" …
Large language models have demonstrated the capability to perform machine translation when the input is prompted with a few examples (in-context learning). Translation quality …
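In-context learning for translation amounts to formatting a few source/target example pairs ahead of the new source sentence and letting the model complete the target. A minimal sketch of that prompt construction, with hypothetical English→French examples (the pairs and format are illustrative; real systems select examples relevant to the input):

```python
# Hypothetical example pairs (illustrative only).
EXAMPLES = [
    ("Good morning.", "Bonjour."),
    ("Thank you very much.", "Merci beaucoup."),
]

def few_shot_prompt(source: str, examples=EXAMPLES) -> str:
    """Format demonstration pairs, then the new source sentence,
    leaving the target line open for the model to complete."""
    blocks = [f"English: {src}\nFrench: {tgt}" for src, tgt in examples]
    blocks.append(f"English: {source}\nFrench:")
    return "\n\n".join(blocks)
```

Feeding `few_shot_prompt("How are you?")` to an LLM would elicit the French continuation after the final `French:` marker.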
This paper describes our system for the low-resource domain adaptation track (Track 3) of the Spoken Language Understanding Grand Challenge, part of the ICASSP Signal …
Modern ML systems increasingly augment input instances with additional relevant information to enhance final prediction. Despite growing interest in such retrieval …
In few-shot learning, such as meta-learning, few-shot fine-tuning, or in-context learning, the limited number of samples used to train a model has a significant impact on the overall …
Y Li, E Shi, D Zheng, K Duan, J Chen… - Proceedings of the 15th …, 2024 - dl.acm.org
The repository-level code generation task involves generating code at a specified location based on unfinished code with repository context. Existing research mainly relies on retrieval …
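The retrieval step these repository-level approaches share can be sketched as: rank repository files by their overlap with the unfinished code, then prepend the best matches as context. A toy version using identifier-level Jaccard similarity over an in-memory repository (the repository contents, similarity measure, and prompt format are all assumptions for illustration):

```python
import re

# Toy repository: path -> file contents (illustrative only).
REPO = {
    "utils/math.py": "def add(a, b):\n    return a + b\n",
    "utils/io.py": "def read_lines(path):\n    with open(path) as f:\n        return f.readlines()\n",
}

def identifiers(code: str) -> set[str]:
    """Extract identifier-like tokens from source text."""
    return set(re.findall(r"[A-Za-z_]\w*", code))

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def retrieve_context(unfinished: str, k: int = 1) -> list[str]:
    """Rank repository files by identifier overlap with the unfinished code."""
    ids = identifiers(unfinished)
    ranked = sorted(REPO, key=lambda p: jaccard(ids, identifiers(REPO[p])), reverse=True)
    return [REPO[p] for p in ranked[:k]]

def completion_prompt(unfinished: str) -> str:
    """Splice retrieved files above the unfinished code as completion context."""
    context = "\n".join(retrieve_context(unfinished))
    return f"# Retrieved repository context:\n{context}\n# Complete:\n{unfinished}"
```

Here a fragment that calls `add(` would pull in `utils/math.py` as context, since they share the most identifiers.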