Software Vulnerability Management in the Era of Artificial Intelligence: An Industry Perspective
- URL: http://arxiv.org/abs/2512.18261v2
- Date: Tue, 23 Dec 2025 10:10:18 GMT
- Title: Software Vulnerability Management in the Era of Artificial Intelligence: An Industry Perspective
- Authors: M. Mehdi Kholoosi, Triet Huynh Minh Le, M. Ali Babar
- Abstract summary: Our study aims to determine the extent of the adoption of AI-powered tools for Software Vulnerability Management (SVM). We conducted a survey study involving 60 practitioners from diverse industry sectors across 27 countries. Our findings indicate that AI-powered tools are used throughout the SVM life cycle, with 69% of users reporting satisfaction with their current use.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Artificial Intelligence (AI) has revolutionized software development, particularly by automating repetitive tasks and improving developer productivity. While these advancements are well-documented, the use of AI-powered tools for Software Vulnerability Management (SVM), such as vulnerability detection and repair, remains underexplored in industry settings. To bridge this gap, our study aims to determine the extent of the adoption of AI-powered tools for SVM, identify barriers and facilitators to their use, and gather insights to help improve the tools to better meet industry needs. We conducted a survey study involving 60 practitioners from diverse industry sectors across 27 countries. The survey incorporates both quantitative and qualitative questions to analyze the adoption trends, assess tool strengths, identify practical challenges, and uncover opportunities for improvement. Our findings indicate that AI-powered tools are used throughout the SVM life cycle, with 69% of users reporting satisfaction with their current use. Practitioners value these tools for their speed, coverage, and accessibility. However, concerns about false positives, missing context, and trust issues remain prevalent. We observe a socio-technical adoption pattern in which AI outputs are filtered through human oversight and organizational governance. To support safe and effective use of AI for SVM, we recommend improvements in explainability, contextual awareness, integration workflows, and validation practices. We assert that these findings can offer practical guidance for practitioners, tool developers, and researchers seeking to enhance secure software development through the use of AI.
Related papers
- Adoption of Generative Artificial Intelligence in the German Software Engineering Industry: An Empirical Study [9.442926409509038]
Generative artificial intelligence (GenAI) tools have seen rapid adoption among software developers. While adoption rates in the industry are rising, the underlying factors influencing the effective use of these tools have not been thoroughly investigated. This issue is particularly relevant in environments with stringent regulatory requirements, such as Germany. No empirical study has systematically examined the adoption dynamics of GenAI tools within the German context.
arXiv Detail & Related papers (2026-01-23T12:42:33Z) - Deploying AI for Signal Processing education: Selected challenges and intriguing opportunities [44.18936398140735]
The article explores the use of AI tools to facilitate and enhance education. Primers are provided on several core technical issues that arise when using AI in educational settings. The article serves as a resource for researchers and educators seeking to advance AI's role in engineering education.
arXiv Detail & Related papers (2025-09-10T19:19:26Z) - Explainability as a Compliance Requirement: What Regulated Industries Need from AI Tools for Design Artifact Generation [0.7874708385247352]
We investigate the explainability gap in AI-driven design artifact generation through semi-structured interviews with ten practitioners from safety-critical industries. Our findings reveal that non-explainable AI outputs necessitate extensive manual validation, reduce stakeholder trust, struggle to handle domain-specific terminology, disrupt team collaboration, and introduce regulatory compliance risks. This study outlines a practical roadmap for improving the transparency, reliability, and applicability of AI tools in requirements engineering.
arXiv Detail & Related papers (2025-07-12T09:34:39Z) - What Challenges Do Developers Face When Using Verification-Aware Programming Languages? [43.72088093637808]
In software development, increasing software reliability often involves testing. For complex and critical systems, developers can use Design by Contract (DbC) methods to define precise specifications that software components must satisfy. Verification-Aware (VA) programming languages support DbC and formal verification at compile-time or run-time, offering stronger correctness guarantees than traditional testing.
arXiv Detail & Related papers (2025-06-30T10:17:39Z) - AISafetyLab: A Comprehensive Framework for AI Safety Evaluation and Improvement [73.0700818105842]
We introduce AISafetyLab, a unified framework and toolkit that integrates representative attack, defense, and evaluation methodologies for AI safety. AISafetyLab features an intuitive interface that enables developers to seamlessly apply various techniques. We conduct empirical studies on Vicuna, analyzing different attack and defense strategies to provide valuable insights into their comparative effectiveness.
arXiv Detail & Related papers (2025-02-24T02:11:52Z) - Interactive Agents to Overcome Ambiguity in Software Engineering [61.40183840499932]
AI agents are increasingly being deployed to automate tasks, often based on ambiguous and underspecified user instructions. Making unwarranted assumptions and failing to ask clarifying questions can lead to suboptimal outcomes. We study the ability of LLM agents to handle ambiguous instructions in interactive code generation settings by evaluating the performance of proprietary and open-weight models.
arXiv Detail & Related papers (2025-02-18T17:12:26Z) - Bridging the Communication Gap: Evaluating AI Labeling Practices for Trustworthy AI Development [41.64451715899638]
High-level AI labels, inspired by frameworks like EU energy labels, have been proposed to make the properties of AI models more transparent. This study evaluates AI labeling through qualitative interviews along four key research questions.
arXiv Detail & Related papers (2025-01-21T06:00:14Z) - Dear Diary: A randomized controlled trial of Generative AI coding tools in the workplace [2.5280615594444567]
Generative AI coding tools are relatively new, and their impact on developers extends beyond traditional coding metrics.
This study aims to illuminate developers' preexisting beliefs about generative AI tools, their self-perceptions, and how regular use of these tools may alter these beliefs.
Our findings reveal that the introduction and sustained use of generative AI coding tools significantly increases developers' perceptions of these tools as both useful and enjoyable.
arXiv Detail & Related papers (2024-10-24T00:07:27Z) - "I Don't Use AI for Everything": Exploring Utility, Attitude, and Responsibility of AI-empowered Tools in Software Development [19.851794567529286]
This study investigates the adoption, impact, and security considerations of AI-empowered tools in the software development process.
Our findings reveal widespread adoption of AI tools across various stages of software development.
arXiv Detail & Related papers (2024-09-20T09:17:10Z) - Towards AI Accountability Infrastructure: Gaps and Opportunities in AI Audit Tooling [1.841662059101602]
We compare the current ecosystem of AI audit tooling to practitioner needs. While many tools are designed to help set standards and evaluate AI systems, they often fall short in supporting accountability. We conclude that the available resources do not currently support the full scope of AI audit practitioners' needs.
arXiv Detail & Related papers (2024-02-27T19:52:54Z) - Harnessing the Computing Continuum across Personalized Healthcare, Maintenance and Inspection, and Farming 4.0 [37.03658877613283]
The AI-SPRINT project focuses on the development and implementation of AI applications across the computing continuum.
This paper provides an in-depth examination of applications -- Personalized Healthcare, Maintenance and Inspection, and Farming 4.0.
We analyze how the proposed toolchain effectively addresses a range of challenges and refines processes, discussing its relevance and impact in multiple domains.
arXiv Detail & Related papers (2024-02-23T09:20:34Z) - Using Machine Learning To Identify Software Weaknesses From Software Requirement Specifications [49.1574468325115]
This research focuses on finding an efficient machine learning algorithm to identify software weaknesses from requirement specifications.
Keywords extracted using latent semantic analysis help map the CWE categories to PROMISE_exp. Naive Bayes, support vector machine (SVM), decision trees, neural network, and convolutional neural network (CNN) algorithms were tested.
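The abstract above names several classifiers for mapping requirement text to CWE categories. As a rough illustration of the Naive Bayes variant only, here is a minimal multinomial Naive Bayes over word counts; the requirement sentences, CWE labels, and whitespace tokenizer below are invented stand-ins, and the paper's LSA keyword extraction and PROMISE_exp dataset are not reproduced.

```python
import math
from collections import Counter, defaultdict

# Toy training data: requirement sentences paired with hypothetical CWE
# labels. Purely illustrative; not drawn from the PROMISE_exp dataset.
TRAIN = [
    ("the system shall validate all user input before processing", "CWE-20"),
    ("user input fields must be checked against allowed formats", "CWE-20"),
    ("passwords shall be stored using a strong hash function", "CWE-916"),
    ("credentials must never be stored in plain text", "CWE-916"),
    ("the application shall limit failed login attempts", "CWE-307"),
    ("accounts are locked after repeated failed authentication", "CWE-307"),
]

def tokenize(text):
    # Naive whitespace tokenizer; a real pipeline would normalize further.
    return text.lower().split()

def train_nb(samples):
    """Fit a multinomial Naive Bayes: per-class priors and token counts."""
    class_docs = Counter()
    token_counts = defaultdict(Counter)
    vocab = set()
    for text, label in samples:
        class_docs[label] += 1
        for tok in tokenize(text):
            token_counts[label][tok] += 1
            vocab.add(tok)
    return class_docs, token_counts, vocab

def predict(text, class_docs, token_counts, vocab):
    """Return the class with the highest log-posterior (Laplace smoothing)."""
    total_docs = sum(class_docs.values())
    best, best_lp = None, float("-inf")
    for label, n_docs in class_docs.items():
        lp = math.log(n_docs / total_docs)
        denom = sum(token_counts[label].values()) + len(vocab)
        for tok in tokenize(text):
            lp += math.log((token_counts[label][tok] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

model = train_nb(TRAIN)
print(predict("login shall be disabled after failed attempts", *model))  # CWE-307
```

Laplace smoothing keeps unseen query tokens from zeroing out a class, which matters here because requirement phrasing varies far more than the tiny training vocabulary covers.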
arXiv Detail & Related papers (2023-08-10T13:19:10Z) - AI for IT Operations (AIOps) on Cloud Platforms: Reviews, Opportunities and Challenges [60.56413461109281]
Artificial Intelligence for IT operations (AIOps) aims to combine the power of AI with the big data generated by IT Operations processes.
We discuss in depth the key types of data emitted by IT Operations activities, the scale and challenges in analyzing them, and where they can be helpful.
We categorize the key AIOps tasks as incident detection, failure prediction, root cause analysis, and automated actions.
arXiv Detail & Related papers (2023-04-10T15:38:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.