Neural Code Intelligence--leveraging deep learning to understand, generate, and optimize code--holds immense potential for transformative impacts on society as a whole. Bridging the …
J Jiang, F Wang, J Shen, S Kim, S Kim - arXiv preprint arXiv:2406.00515, 2024 - arxiv.org
Large Language Models (LLMs) have achieved remarkable advancements in diverse code-related tasks, where they are known as Code LLMs, particularly in code generation, which generates …
Pre-trained code models have emerged as crucial tools in various code intelligence tasks. However, their effectiveness depends on the quality of the pre-training dataset, particularly …
D Park, J Lee, H Jeong, S Park, S Lee - arXiv preprint arXiv:2403.12675, 2024 - arxiv.org
The current evaluation of Large Language Models (LLMs) predominantly relies on benchmarks that assess their embedded knowledge through multiple-choice …
W He, W Zhang, Y Jin, Q Zhou, H Zhang… - Journal of Medical Internet …, 2024 - jmir.org
Background There is a dearth of feasibility assessments of using large language models (LLMs) to respond to inquiries from autistic patients in a Chinese-language …
Diffusion models have gained attention in text processing, offering many potential advantages over traditional autoregressive models. This work explores the integration of …
Large Code Generation Models (LCGMs) have garnered significant attention and achieved promising results on various programming tasks. However, concerns arise regarding …
M Liu, R Liu, H Wang, W Buntine - arXiv preprint arXiv:2405.00704, 2024 - arxiv.org
ChatGPT has transformed the AI community, and an active line of research is the performance evaluation of ChatGPT. A key challenge for this evaluation is that ChatGPT is still closed …
C Wang, J Zhao, J Gong - arXiv preprint arXiv:2403.18969, 2024 - arxiv.org
Recent advancements in Large Language Models (LLMs), particularly those built on Transformer architectures, have significantly broadened the scope of natural language …