AI for DevSecOps: A Landscape and Future Opportunities
- URL: http://arxiv.org/abs/2404.04839v2
- Date: Fri, 13 Sep 2024 00:08:11 GMT
- Title: AI for DevSecOps: A Landscape and Future Opportunities
- Authors: Michael Fu, Jirat Pasuksmit, Chakkrit Tantithamthavorn
- Abstract summary: DevOps has emerged as one of the most rapidly evolving software development paradigms.
With the growing concerns surrounding security in software systems, the DevSecOps paradigm has gained prominence.
Integrating security into the DevOps workflow can impact agility and impede delivery speed.
- Score: 6.513361705307775
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: DevOps has emerged as one of the most rapidly evolving software development paradigms. With growing concerns surrounding security in software systems, the DevSecOps paradigm has gained prominence, urging practitioners to incorporate security practices seamlessly into the DevOps workflow. However, integrating security into the DevOps workflow can reduce agility and impede delivery speed. Recently, advances in artificial intelligence (AI) have revolutionized automation in various software domains, including software security. AI-driven security approaches, particularly those leveraging machine learning or deep learning, hold promise for automating security workflows. By reducing manual effort, they can be integrated into DevOps without interrupting delivery speed, aligning with the DevSecOps paradigm. This paper contributes to the critical intersection of AI and DevSecOps by presenting a comprehensive landscape of AI-driven security techniques applicable to DevOps and by identifying avenues for enhancing security, trust, and efficiency in software development processes. We analyzed 99 research papers spanning 2017 to 2023. Specifically, we address two key research questions (RQs). In RQ1, we identified 12 security tasks associated with the DevSecOps process and reviewed existing AI-driven security approaches, the problems they address, and the 65 benchmarks used to evaluate those approaches. Drawing on these findings, in RQ2 we discuss state-of-the-art AI-driven security approaches, highlight 15 challenges in existing research, and propose 15 corresponding avenues for future opportunities.
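To make the integration point concrete, below is a minimal sketch of how an ML-based vulnerability check might sit as a gate in a CI stage. Everything here is an illustrative assumption rather than a method from the surveyed papers: the toy classifier stands in for a pre-trained vulnerability-detection model, and the `scan_changed_files` helper, `RISK_THRESHOLD` value, and exit-code convention are hypothetical.

```python
"""Hypothetical sketch: an ML-based vulnerability gate in a CI pipeline.

The classifier, helper names, and threshold are illustrative assumptions,
not a method from the surveyed papers.
"""
import sys
from pathlib import Path

from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import LogisticRegression

# Toy stand-in for a real vulnerability-detection model; in practice a
# pre-trained classifier would be loaded from the artifact store instead.
_vectorizer = HashingVectorizer(n_features=2**12)
_clf = LogisticRegression().fit(
    _vectorizer.transform([
        "strcpy(buf, user_input);",                    # known-risky pattern
        "strncpy(buf, user_input, sizeof(buf) - 1);",  # safer pattern
    ]),
    [1, 0],  # 1 = vulnerable, 0 = benign
)

RISK_THRESHOLD = 0.8  # assumed gate threshold


def scan_changed_files(paths: list[str]) -> list[tuple[str, float]]:
    """Score each changed file and return those above the risk threshold."""
    flagged = []
    for path in paths:
        code = Path(path).read_text(errors="ignore")
        score = _clf.predict_proba(_vectorizer.transform([code]))[0, 1]
        if score >= RISK_THRESHOLD:
            flagged.append((path, score))
    return flagged


if __name__ == "__main__":
    # In CI, the changed-file list would come from the diff, e.g.
    #   git diff --name-only origin/main...HEAD
    hits = scan_changed_files(sys.argv[1:])
    for path, score in hits:
        print(f"FLAGGED {path}: vulnerability score {score:.2f}")
    sys.exit(1 if hits else 0)  # non-zero exit fails the pipeline stage
```

In a real pipeline, a flagged result would typically route to human review rather than hard-fail every build, keeping the impact on delivery speed that the abstract describes to a minimum.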
Related papers
- Computational Safety for Generative AI: A Signal Processing Perspective [65.268245109828]
Computational safety is a mathematical framework that enables the quantitative assessment, formulation, and study of safety challenges in GenAI.
We show how sensitivity analysis and loss landscape analysis can be used to detect malicious prompts containing jailbreak attempts (a minimal sketch of this sensitivity idea follows the related-papers list below).
We discuss key open research challenges, opportunities, and the essential role of signal processing in computational AI safety.
arXiv Detail & Related papers (2025-02-18T02:26:50Z) - "I Don't Use AI for Everything": Exploring Utility, Attitude, and Responsibility of AI-empowered Tools in Software Development [19.851794567529286]
This study investigates the adoption, impact, and security considerations of AI-empowered tools in the software development process.
Our findings reveal widespread adoption of AI tools across various stages of software development.
arXiv Detail & Related papers (2024-09-20T09:17:10Z) - Continuous risk assessment in secure DevOps [0.24475591916185502]
We argue how secure DevOps could profit from engaging with risk related activities within organisations.
We focus on combining Risk Assessment (RA), particularly Threat Modelling (TM), with security considerations applied early in the software life-cycle.
arXiv Detail & Related papers (2024-09-05T10:42:27Z) - EARBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents [53.717918131568936]
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction.
Foundation models serving as the "brain" of EAI agents have shown promising results for high-level task planning.
However, the deployment of these agents in physical environments presents significant safety challenges.
This study introduces EARBench, a novel framework for automated physical risk assessment in EAI scenarios.
arXiv Detail & Related papers (2024-08-08T13:19:37Z) - Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress? [59.96471873997733]
We propose an empirical foundation for developing more meaningful safety metrics and define AI safety in a machine learning research context.
We aim to provide a more rigorous framework for AI safety research, advancing the science of safety evaluations and clarifying the path towards measurable progress.
arXiv Detail & Related papers (2024-07-31T17:59:24Z) - AI Agents Under Threat: A Survey of Key Security Challenges and Future Pathways [10.16690494897609]
An Artificial Intelligence (AI) agent is a software entity that autonomously performs tasks or makes decisions based on pre-defined objectives and data inputs.
This survey delves into the emerging security threats faced by AI agents, categorizing them into four critical knowledge gaps.
By systematically reviewing these threats, this paper highlights both the progress made and the existing limitations in safeguarding AI agents.
arXiv Detail & Related papers (2024-06-04T01:22:31Z) - Welcome Your New AI Teammate: On Safety Analysis by Leashing Large Language Models [0.6699222582814232]
"Hazard Analysis & Risk Assessment" (HARA) is an essential step to start the safety requirements specification.
We propose a framework to support a higher degree of automation of HARA with Large Language Models (LLMs).
arXiv Detail & Related papers (2024-03-14T16:56:52Z) - Safeguarded Progress in Reinforcement Learning: Safe Bayesian Exploration for Control Policy Synthesis [63.532413807686524]
This paper addresses the problem of maintaining safety during training in Reinforcement Learning (RL).
We propose a new architecture that handles the trade-off between efficient progress and safety during exploration.
arXiv Detail & Related papers (2023-12-18T16:09:43Z) - Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z) - Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z) - TanksWorld: A Multi-Agent Environment for AI Safety Research [5.218815947097599]
The ability to create artificial intelligence capable of performing complex tasks is rapidly outpacing our ability to ensure the safe and assured operation of AI-enabled systems.
Recent simulation environments designed to illustrate AI safety risks are relatively simple or narrowly focused on a particular issue.
We introduce the AI safety TanksWorld as an environment for AI safety research with three essential aspects.
arXiv Detail & Related papers (2020-02-25T21:00:52Z)
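As noted in the computational-safety entry above, one recoverable idea from that line of work is gradient-based sensitivity analysis for prompt screening. The sketch below is a hypothetical, self-contained illustration: the toy embedding-plus-linear model stands in for a real LLM, and the `input_sensitivity` helper and `THRESHOLD` value are assumptions, not the paper's actual formulation.

```python
"""Hypothetical sketch: gradient-based sensitivity analysis for prompt screening.

The model is a toy stand-in for an LLM; names and the threshold are
illustrative assumptions, not the paper's method.
"""
import torch
import torch.nn as nn

VOCAB, DIM = 1000, 32
torch.manual_seed(0)

# Toy stand-in for a language model: embeddings plus a linear next-token head.
embed = nn.Embedding(VOCAB, DIM)
head = nn.Linear(DIM, VOCAB)
loss_fn = nn.CrossEntropyLoss()


def input_sensitivity(token_ids: torch.Tensor) -> float:
    """Return the norm of the loss gradient w.r.t. the input embeddings.

    Intuition: prompts whose loss is unusually sensitive to small embedding
    perturbations may indicate adversarial (jailbreak) inputs.
    """
    x = embed(token_ids).detach().requires_grad_(True)
    logits = head(x[:-1])                  # predict each next token
    loss = loss_fn(logits, token_ids[1:])  # next-token cross-entropy
    (grad,) = torch.autograd.grad(loss, x)
    return grad.norm().item()


if __name__ == "__main__":
    prompt = torch.randint(0, VOCAB, (16,))  # stand-in for a tokenized prompt
    THRESHOLD = 5.0  # assumed; would be calibrated on benign/malicious data
    score = input_sensitivity(prompt)
    print(f"sensitivity={score:.3f} -> {'flag' if score > THRESHOLD else 'pass'}")
```

The design intuition is that adversarial prompts often sit in sharper regions of the loss landscape, so an unusually large input-gradient norm is a cheap signal to escalate a prompt for further checks; in practice the threshold would be calibrated on held-out benign and malicious prompts.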