A bidirectional LSTM language model for code evaluation and repair

MM Rahman, Y Watanobe, K Nakamura - Symmetry, 2021 - mdpi.com
Programming is a vital skill in computer science and engineering-related disciplines. However, developing source code is an error-prone task. Logical errors in code are particularly hard to identify for both students and professionals, and even a single error can be unacceptable to end users. At present, conventional compilers have difficulty identifying many of the errors (especially logical errors) that can occur in code. To mitigate this problem, we propose a language model for evaluating source code using a bidirectional long short-term memory (BiLSTM) neural network. We trained the BiLSTM model on a large number of source code samples while tuning various hyperparameters. We then used the model to evaluate incorrect code and assessed the model’s performance in three principal areas: source code error detection, suggestions for incorrect code repair, and erroneous code classification. Experimental results showed that the proposed BiLSTM model achieved 50.88% correctness in identifying errors and providing suggestions. Moreover, the model achieved an F-score of approximately 97%, outperforming other state-of-the-art models (recurrent neural networks (RNNs) and long short-term memory (LSTM)).
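The general approach described in the abstract can be illustrated with a minimal sketch: a BiLSTM language model over code tokens that scores the probability of each observed token, flags unusually low-probability tokens as candidate errors, and offers the highest-probability alternative as a repair suggestion. This is not the authors' implementation; the dimensions, threshold, and function names below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class BiLSTMCodeModel(nn.Module):
    """Sketch of a bidirectional LSTM language model over code tokens."""
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, vocab_size)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) -> logits: (batch, seq_len, vocab_size)
        hidden, _ = self.lstm(self.embed(token_ids))
        return self.out(hidden)

def flag_suspicious_tokens(model, token_ids, threshold=0.01):
    """Return (position, suggested_token_id) pairs where the observed token's
    probability under the model falls below the (assumed) threshold."""
    with torch.no_grad():
        probs = torch.softmax(model(token_ids), dim=-1)
    suggestions = []
    for pos in range(token_ids.size(1)):
        observed = token_ids[0, pos]
        if probs[0, pos, observed] < threshold:
            suggestions.append((pos, int(probs[0, pos].argmax())))
    return suggestions
```

In this sketch, evaluation reduces to scoring each token in a tokenized program and collecting positions whose observed token is improbable given both left and right context, which is what the bidirectional architecture provides over a unidirectional RNN or LSTM.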