From Detection to Prevention: Explaining Security-Critical Code to Avoid Vulnerabilities
- URL: http://arxiv.org/abs/2602.00711v1
- Date: Sat, 31 Jan 2026 13:16:01 GMT
- Title: From Detection to Prevention: Explaining Security-Critical Code to Avoid Vulnerabilities
- Authors: Ranjith Krishnamurthy, Oshando Johnson, Goran Piskachev, Eric Bodden,
- Abstract summary: This work explores a proactive strategy to prevent vulnerabilities by highlighting code regions that implement security-critical functionality.<n>We present an IntelliJ IDEA plugin prototype that uses code-level software metrics to identify potentially security-critical methods.
- Score: 2.490168997159702
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Security vulnerabilities often arise unintentionally during development due to a lack of security expertise and code complexity. Traditional tools, such as static and dynamic analysis, detect vulnerabilities only after they are introduced in code, leading to costly remediation. This work explores a proactive strategy to prevent vulnerabilities by highlighting code regions that implement security-critical functionality -- such as data access, authentication, and input handling -- and providing guidance for their secure implementation. We present an IntelliJ IDEA plugin prototype that uses code-level software metrics to identify potentially security-critical methods and large language models (LLMs) to generate prevention-oriented explanations. Our initial evaluation on the Spring-PetClinic application shows that the selected metrics identify most known security-critical methods, while an LLM provides actionable, prevention-focused insights. Although these metrics capture structural properties rather than semantic aspects of security, this work lays the foundation for code-level security-aware metrics and enhanced explanations.
Related papers
- Secure Code Generation via Online Reinforcement Learning with Vulnerability Reward Model [60.60587869092729]
Large language models (LLMs) are increasingly used in software development, yet their tendency to generate insecure code remains a major barrier to real-world deployment.<n>We propose SecCoderX, an online reinforcement learning framework for functionality-preserving secure code generation.
arXiv Detail & Related papers (2026-02-07T07:42:07Z) - Towards Verifiably Safe Tool Use for LLM Agents [53.55621104327779]
Large language model (LLM)-based AI agents extend capabilities by enabling access to tools such as data sources, APIs, search engines, code sandboxes, and even other agents.<n>LLMs may invoke unintended tool interactions and introduce risks, such as leaking sensitive data or overwriting critical records.<n>Current approaches to mitigate these risks, such as model-based safeguards, enhance agents' reliability but cannot guarantee system safety.
arXiv Detail & Related papers (2026-01-12T21:31:38Z) - An Empirical Study on the Security Vulnerabilities of GPTs [48.12756684275687]
GPTs are one kind of customized AI agents based on OpenAI's large language models.<n>We present an empirical study on the security vulnerabilities of GPTs.
arXiv Detail & Related papers (2025-11-28T13:30:25Z) - Aspect-Oriented Programming in Secure Software Development: A Case Study of Security Aspects in Web Applications [0.0]
This study investigates the role of Aspect-Oriented Programming (AOP) in enhancing secure software development.<n>We compare AOP-based implementations of security features including authentication, authorization, input validation, encryption, logging, and session management.<n>The findings demonstrate that AOP enhances modularity, reusability, and maintainability of security mechanisms, while introducing only minimal performance overhead.
arXiv Detail & Related papers (2025-09-09T07:12:55Z) - Secure Tug-of-War (SecTOW): Iterative Defense-Attack Training with Reinforcement Learning for Multimodal Model Security [63.41350337821108]
We propose Secure Tug-of-War (SecTOW) to enhance the security of multimodal large language models (MLLMs)<n>SecTOW consists of two modules: a defender and an auxiliary attacker, both trained iteratively using reinforcement learning (GRPO)<n>We show that SecTOW significantly improves security while preserving general performance.
arXiv Detail & Related papers (2025-07-29T17:39:48Z) - LLM Agents Should Employ Security Principles [60.03651084139836]
This paper argues that the well-established design principles in information security should be employed when deploying Large Language Model (LLM) agents at scale.<n>We introduce AgentSandbox, a conceptual framework embedding these security principles to provide safeguards throughout an agent's life-cycle.
arXiv Detail & Related papers (2025-05-29T21:39:08Z) - Enhancing Cloud Security through Topic Modelling [0.6117371161379209]
This research explores the application of Natural Language Processing (NLP) techniques to analyse security-related text data and anticipate potential threats.<n>We focus on Latent Dirichlet Allocation (LDA) and Probabilistic Latent Semantic Analysis (PLSA) to extract meaningful patterns from data sources.
arXiv Detail & Related papers (2025-05-01T19:17:20Z) - A Comprehensive Study of LLM Secure Code Generation [19.82291066720634]
Previous research primarily relies on a single static analyzer, CodeQL, to detect vulnerabilities in generated code.<n>We apply both security inspection and functionality validation to the same generated code and evaluate these two aspects together.<n>Our study reveals that existing techniques often compromise the functionality of generated code to enhance security.
arXiv Detail & Related papers (2025-03-18T20:12:50Z) - Tit-for-Tat: Safeguarding Large Vision-Language Models Against Jailbreak Attacks via Adversarial Defense [90.71884758066042]
Large vision-language models (LVLMs) introduce a unique vulnerability: susceptibility to malicious attacks via visual inputs.<n>We propose ESIII (Embedding Security Instructions Into Images), a novel methodology for transforming the visual space from a source of vulnerability into an active defense mechanism.
arXiv Detail & Related papers (2025-03-14T17:39:45Z) - Is Your AI-Generated Code Really Safe? Evaluating Large Language Models on Secure Code Generation with CodeSecEval [20.959848710829878]
Large language models (LLMs) have brought significant advancements to code generation and code repair.
However, their training using unsanitized data from open-source repositories, like GitHub, raises the risk of inadvertently propagating security vulnerabilities.
We aim to present a comprehensive study aimed at precisely evaluating and enhancing the security aspects of code LLMs.
arXiv Detail & Related papers (2024-07-02T16:13:21Z) - A Survey and Comparative Analysis of Security Properties of CAN Authentication Protocols [92.81385447582882]
The Controller Area Network (CAN) bus leaves in-vehicle communications inherently non-secure.
This paper reviews and compares the 15 most prominent authentication protocols for the CAN bus.
We evaluate protocols based on essential operational criteria that contribute to ease of implementation.
arXiv Detail & Related papers (2024-01-19T14:52:04Z) - A Novel Approach to Identify Security Controls in Source Code [4.598579706242066]
This paper enumerates a comprehensive list of commonly used security controls and creates a dataset for each one of them.
It uses the state-of-the-art NLP technique Bidirectional Representations from Transformers (BERT) and the Tactic Detector from our prior work to show that security controls could be identified with high confidence.
arXiv Detail & Related papers (2023-07-10T21:14:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this site (including all information) and is not responsible for any consequences.