Comparative Analysis of AI-Driven Security Approaches in DevSecOps: Challenges, Solutions, and Future Directions
- URL: http://arxiv.org/abs/2504.19154v1
- Date: Sun, 27 Apr 2025 08:18:11 GMT
- Title: Comparative Analysis of AI-Driven Security Approaches in DevSecOps: Challenges, Solutions, and Future Directions
- Authors: Farid Binbeshr, Muhammad Imam
- Abstract summary: This study conducts a systematic literature review to analyze and compare AI-driven security solutions in DevSecOps. The findings reveal gaps in empirical validation, scalability, and integration of AI in security automation. The study proposes future directions for optimizing AI-based security frameworks in DevSecOps.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The integration of security within DevOps, known as DevSecOps, has gained traction in modern software development to address security vulnerabilities while maintaining agility. Artificial Intelligence (AI) and Machine Learning (ML) have been increasingly leveraged to enhance security automation, threat detection, and compliance enforcement. However, existing studies primarily focus on individual aspects of AI-driven security in DevSecOps, lacking a structured comparison of methodologies. This study conducts a systematic literature review (SLR) to analyze and compare AI-driven security solutions in DevSecOps, evaluating their technical capabilities, implementation challenges, and operational impacts. The findings reveal gaps in empirical validation, scalability, and integration of AI in security automation. The study highlights best practices, identifies research gaps, and proposes future directions for optimizing AI-based security frameworks in DevSecOps.
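To ground the kind of AI-driven security automation the review covers, below is a minimal, hypothetical sketch of ML-based threat detection in a CI/CD pipeline: an unsupervised anomaly detector (scikit-learn's IsolationForest) flags builds whose telemetry deviates from historical norms. The feature set, thresholds, and synthetic data are illustrative assumptions, not a method taken from the paper.

```python
# Minimal sketch: unsupervised anomaly detection over CI/CD pipeline telemetry.
# Hypothetical example only -- feature choices and thresholds are illustrative
# assumptions, not the methodology of any paper surveyed in this review.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic historical builds: [duration_sec, files_changed, deps_added, failed_tests]
normal_builds = np.column_stack([
    rng.normal(300, 40, 500),   # typical build duration
    rng.poisson(8, 500),        # files changed per commit
    rng.poisson(0.2, 500),      # new dependencies are rare
    rng.poisson(0.5, 500),      # occasional test failures
])

# Fit the detector on historical (assumed mostly benign) pipeline runs.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_builds)

# A new build that adds many dependencies and touches many files at once --
# a pattern sometimes associated with supply-chain tampering.
suspicious_build = np.array([[290.0, 120.0, 15.0, 0.0]])

score = detector.decision_function(suspicious_build)[0]  # lower = more anomalous
if detector.predict(suspicious_build)[0] == -1:
    print(f"ALERT: anomalous build (score={score:.3f}); hold for manual review")
else:
    print(f"Build looks normal (score={score:.3f})")
```

In practice such a detector would run as a pipeline gate, holding anomalous builds for human review rather than blocking outright, so that security checks do not unduly impede delivery speed.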
Related papers
- AISafetyLab: A Comprehensive Framework for AI Safety Evaluation and Improvement [73.0700818105842]
We introduce AISafetyLab, a unified framework and toolkit that integrates representative attack, defense, and evaluation methodologies for AI safety. AISafetyLab features an intuitive interface that enables developers to seamlessly apply various techniques. We conduct empirical studies on Vicuna, analyzing different attack and defense strategies to provide valuable insights into their comparative effectiveness.
arXiv Detail & Related papers (2025-02-24T02:11:52Z)
- Open Problems in Machine Unlearning for AI Safety [61.43515658834902]
Machine unlearning -- the ability to selectively forget or suppress specific types of knowledge -- has shown promise for privacy and data removal tasks.
In this paper, we identify key limitations that prevent unlearning from serving as a comprehensive solution for AI safety.
arXiv Detail & Related papers (2025-01-09T03:59:10Z)
- Position: A taxonomy for reporting and describing AI security incidents [57.98317583163334]
We argue that specific taxonomies are required to describe and report security incidents of AI systems.
Existing frameworks for either non-AI security or generic AI safety incident reporting are insufficient to capture the specific properties of AI security.
arXiv Detail & Related papers (2024-12-19T13:50:26Z)
- EARBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents [53.717918131568936]
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction. Foundation models as the "brain" of EAI agents for high-level task planning have shown promising results. However, the deployment of these agents in physical environments presents significant safety challenges. This study introduces EARBench, a novel framework for automated physical risk assessment in EAI scenarios.
arXiv Detail & Related papers (2024-08-08T13:19:37Z)
- Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress? [59.96471873997733]
We propose an empirical foundation for developing more meaningful safety metrics and define AI safety in a machine learning research context. We aim to provide a more rigorous framework for AI safety research, advancing the science of safety evaluations and clarifying the path towards measurable progress.
arXiv Detail & Related papers (2024-07-31T17:59:24Z)
- Artificial Intelligence as the New Hacker: Developing Agents for Offensive Security [0.0]
This paper explores the integration of Artificial Intelligence (AI) into offensive cybersecurity.
It develops an autonomous AI agent, ReaperAI, designed to simulate and execute cyberattacks.
ReaperAI demonstrates the potential to identify, exploit, and analyze security vulnerabilities autonomously.
arXiv Detail & Related papers (2024-05-09T18:15:12Z)
- AI for DevSecOps: A Landscape and Future Opportunities [6.513361705307775]
DevSecOps has emerged as one of the most rapidly evolving software development paradigms.
With the growing concerns surrounding security in software systems, the DevSecOps paradigm has gained prominence.
Integrating security into the DevOps workflow can impact agility and impede delivery speed.
arXiv Detail & Related papers (2024-04-07T07:24:58Z)
- Quantifying AI Vulnerabilities: A Synthesis of Complexity, Dynamical Systems, and Game Theory [0.0]
We propose a novel approach that introduces three metrics: System Complexity Index (SCI), Lyapunov Exponent for AI Stability (LEAIS), and Nash Equilibrium Robustness (NER).
SCI quantifies the inherent complexity of an AI system, LEAIS captures its stability and sensitivity to perturbations, and NER evaluates its strategic robustness against adversarial manipulation.
arXiv Detail & Related papers (2024-04-07T07:05:59Z)
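To make the flavor of these metrics concrete, here is a minimal, hypothetical sketch of a LEAIS-style stability estimate: it measures the average log growth rate of a small input perturbation pushed repeatedly through a model, in the spirit of a Lyapunov exponent. The toy model, step count, and normalization are assumptions for illustration; the paper's actual formulation may differ.

```python
# Hypothetical LEAIS-style stability estimate (Benettin-style iteration).
# Positive exponents suggest the system amplifies small perturbations.
import numpy as np

RNG = np.random.default_rng(0)
W1 = RNG.normal(0.0, 1.5, (8, 32))  # fixed random weights: toy stand-in model
W2 = RNG.normal(0.0, 1.5, (32, 8))

def model(x: np.ndarray) -> np.ndarray:
    """Toy 'AI system': a fixed two-layer tanh network mapping R^8 -> R^8."""
    return np.tanh(x @ W1) @ W2

def stability_exponent(x: np.ndarray, eps: float = 1e-6, steps: int = 50) -> float:
    """Average log growth rate of a small perturbation under repeated application."""
    delta = np.full_like(x, eps / np.sqrt(x.size))  # initial perturbation, norm eps
    log_growth = []
    for _ in range(steps):
        d_out = model(x + delta) - model(x)
        log_growth.append(np.log(np.linalg.norm(d_out) / np.linalg.norm(delta)))
        x = model(x)                                 # advance the trajectory
        delta = eps * d_out / np.linalg.norm(d_out)  # renormalize perturbation
    return float(np.mean(log_growth))

x0 = RNG.normal(0.0, 1.0, 8)
print(f"estimated stability exponent: {stability_exponent(x0):.3f}")
```

A higher exponent would flag models whose behavior is highly sensitive to input perturbations, complementing the complexity (SCI) and game-theoretic (NER) views of robustness.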
- Testing autonomous vehicles and AI: perspectives and challenges from cybersecurity, transparency, robustness and fairness [53.91018508439669]
The study explores the complexities of integrating Artificial Intelligence into Autonomous Vehicles (AVs).
It examines the challenges introduced by AI components and the impact on testing procedures.
The paper identifies significant challenges and suggests future directions for research and development of AI in AV technology.
arXiv Detail & Related papers (2024-02-21T08:29:42Z)
- Cyber Security Requirements for Platforms Enhancing AI Reproducibility [0.0]
This study focuses on the field of artificial intelligence (AI) and introduces a new framework for evaluating AI platforms.
Five popular AI platforms were assessed: Floydhub, BEAT, Codalab, Kaggle, and OpenML.
The analysis revealed that none of these platforms fully incorporates the necessary cyber security measures.
arXiv Detail & Related papers (2023-09-27T09:43:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.