A fine-tuning enhanced RAG system with quantized influence measure as AI judge

K Rangan, Y Yin - Scientific Reports, 2024 - nature.com
This study presents an innovative enhancement to retrieval-augmented generation (RAG)
systems by seamlessly integrating fine-tuned large language models (LLMs) with vector …

Not merely useful but also amusing: Impact of perceived usefulness and perceived enjoyment on the adoption of AI-powered coding assistant

YW Kim, MC Cha, SH Yoon, SC Lee - International Journal of …, 2024 - Taylor & Francis
Artificial intelligence-powered coding assistants (AI-CAs) have become essential tools in
programming; however, there is limited understanding of the mechanisms driving …

Novel Directions for Neuromorphic Machine Intelligence Guided by Functional Connectivity: A Review

M Illeperuma, R Pina, V De Silva, X Liu - Machines, 2024 - mdpi.com
As we move into the next stages of the technological revolution, artificial intelligence (AI) that
is explainable and sustainable is becoming a key goal for researchers across multiple …

AutoSafeCoder: A multi-agent framework for securing LLM code generation through static analysis and fuzz testing

A Nunez, NT Islam, SK Jha, P Najafirad - arXiv preprint arXiv:2409.10737, 2024 - arxiv.org
Recent advancements in automatic code generation using large language models (LLMs)
have brought us closer to fully automated secure software development. However, existing …

Fixing code generation errors for large language models

H Wen, Y Zhu, C Liu, X Ren, W Du, M Yan - arXiv preprint arXiv …, 2024 - arxiv.org
Code generation leverages artificial intelligence technologies, particularly Large Language
Models (LLMs), to automatically produce source code, enhancing software development …

Using Peer Assessment Leveraging Large Language Models in Software Engineering Education

M Fiore, M Mongiello - International Journal of Software …, 2024 - shibata.yubetsu.com

Less is More: DocString Compression in Code Generation

G Yang, Y Zhou, W Cheng, X Zhang, X Chen… - arXiv preprint arXiv …, 2024 - arxiv.org
The widespread use of Large Language Models (LLMs) in software engineering has
intensified the need for improved model and resource efficiency. In particular, for neural …

DeepCodeProbe: Towards Understanding What Models Trained on Code Learn

V Majdinasab, A Nikanjam, F Khomh - arXiv preprint arXiv:2407.08890, 2024 - arxiv.org
Machine learning models trained on code and related artifacts offer valuable support for
software maintenance but suffer from interpretability issues due to their complex internal …

Why Do Developers Engage with ChatGPT in Issue-Tracker? Investigating Usage and Reliance on ChatGPT-Generated Code

JK Das, S Mondal, CK Roy - arXiv preprint arXiv:2412.06757, 2024 - arxiv.org
Large language models (LLMs) like ChatGPT have shown the potential to assist developers
with coding and debugging tasks. However, their role in collaborative issue resolution is …

ChatGPT in Data Visualization Education: A Student Perspective

NW Kim, HK Ko, G Myers, B Bach - arXiv preprint arXiv:2405.00748, 2024 - arxiv.org
Unlike traditional educational chatbots that rely on pre-programmed responses, large language model-driven chatbots, such as ChatGPT, demonstrate remarkable versatility and …