Large language models for cyber security: A systematic literature review

HX Xu, SA Wang, N Li, Y Zhao, K Chen, K Wang… - arXiv preprint arXiv …, 2024 - arxiv.org
The rapid advancement of Large Language Models (LLMs) has opened up new
opportunities for leveraging artificial intelligence in various domains, including cybersecurity …

When LLMs meet cybersecurity: A systematic literature review

J Zhang, H Bu, H Wen, Y Chen, L Li, H Zhu - arXiv preprint arXiv …, 2024 - arxiv.org
The rapid advancements in large language models (LLMs) have opened new avenues
across various fields, including cybersecurity, which faces an ever-evolving threat landscape …

Enhancing Static Analysis for Practical Bug Detection: An LLM-Integrated Approach

H Li, Y Hao, Y Zhai, Z Qian - Proceedings of the ACM on Programming …, 2024 - dl.acm.org
While static analysis is instrumental in uncovering software bugs, its precision in analyzing
large and intricate codebases remains challenging. The emerging prowess of Large …

Vulnerability detection with code language models: How far are we?

Y Ding, Y Fu, O Ibrahim, C Sitawarin, X Chen… - arXiv preprint arXiv …, 2024 - arxiv.org
In the context of the rising interest in code language models (code LMs) and vulnerability
detection, we study the effectiveness of code LMs for detecting vulnerabilities. Our analysis …

A Comprehensive Study of the Capabilities of Large Language Models for Vulnerability Detection

B Steenhoek, MM Rahman, MK Roy, MS Alam… - arXiv preprint arXiv …, 2024 - arxiv.org
Large Language Models (LLMs) have demonstrated great potential for code generation and
other software engineering tasks. Vulnerability detection is of crucial importance to …

Do neutral prompts produce insecure code? FormAI-v2 dataset: Labelling vulnerabilities in code generated by large language models

N Tihanyi, T Bisztray, MA Ferrag, R Jain… - arXiv preprint arXiv …, 2024 - arxiv.org
This study provides a comparative analysis of state-of-the-art large language models
(LLMs), analyzing how likely they are to generate vulnerabilities when writing simple C programs …

LLM4Vuln: A Unified Evaluation Framework for Decoupling and Enhancing LLMs' Vulnerability Reasoning

Y Sun, D Wu, Y Xue, H Liu, W Ma, L Zhang… - arXiv preprint arXiv …, 2024 - arxiv.org
Large language models (LLMs) have demonstrated significant potential for many
downstream tasks, including those requiring human-level intelligence, such as vulnerability …

Bridge and Hint: Extending Pre-trained Language Models for Long-Range Code

Y Chen, C Gao, Z Yang, H Zhang, Q Liao - arXiv preprint arXiv …, 2024 - arxiv.org
In the field of code intelligence, effectively modeling long-range code poses a significant
challenge. Existing pre-trained language models (PLMs) such as UniXcoder have achieved …

On the effectiveness of Large Language Models for GitHub Workflows

X Zhang, S Muralee, S Cherupattamoolayil… - arXiv preprint arXiv …, 2024 - arxiv.org
GitHub workflows, or GitHub CI, is a popular continuous integration platform that enables
developers to automate various software engineering tasks by specifying them as workflows …

Asymmetric Bias in Text-to-Image Generation with Adversarial Attacks

HS Shahgir, X Kong, GV Steeg, Y Dong - arXiv preprint arXiv:2312.14440, 2023 - arxiv.org
The widespread use of Text-to-Image (T2I) models in content generation requires careful
examination of their safety, including their robustness to adversarial attacks. Despite …