Large language models for code (LLM4Code), which demonstrate strong performance (e.g., high accuracy) in processing source code, have significantly transformed software …
Pre-trained language models of code are now widely used in various software engineering tasks such as code generation, code completion, vulnerability detection, etc. This, in turn …
Context: Pre-trained models (PTMs) have demonstrated significant potential in automatic code translation. However, the vulnerability of these models in translation tasks, particularly …
Artificial Intelligence (AI) is increasingly used as an aid in developing computer programs. While it can boost software development and improve coding proficiency, this …
Z Tian, J Chen, X Zhang - 2023 38th IEEE/ACM International …, 2023 - ieeexplore.ieee.org
Deep learning has been widely adopted to tackle various code-based tasks by building deep code models from large numbers of code snippets. While these deep code …
S Cao, X Sun, X Wu, D Lo, L Bo, B Li… - Proceedings of the IEEE …, 2024 - dl.acm.org
Recently, Graph Neural Network (GNN)-based vulnerability detection systems have achieved remarkable success. However, the lack of explainability poses a critical challenge …
Y Yang, H Fan, C Lin, Q Li, Z Zhao… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
State-of-the-art source code classification models exhibit excellent task transferability, in which the source code encoders are first pre-trained on a source domain dataset in a self …
S Abid, X Cai, L Jiang - Empirical Software Engineering, 2025 - Springer
Deep Neural Network-based models have demonstrated high accuracy for semantic code clone detection. However, the lack of generalization poses a threat to the …
Deep code models (DCMs) have achieved impressive results and have been widely applied to various code-related tasks. However, existing studies show that some …