Continuous risk assessment in secure DevOps
- URL: http://arxiv.org/abs/2409.03405v1
- Date: Thu, 5 Sep 2024 10:42:27 GMT
- Title: Continuous risk assessment in secure DevOps
- Authors: Ricardo M. Czekster
- Abstract summary: We argue how secure DevOps could profit from engaging with risk-related activities within organisations.
We focus on combining Risk Assessment (RA), particularly Threat Modelling (TM), with security considerations applied early in the software life-cycle.
- Score: 0.24475591916185502
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: DevOps (development and operations) has significantly changed how teams overcome deficiencies in delivering high-quality software to production environments. Recent years have witnessed increased interest in embedding DevOps with cybersecurity in an approach dubbed secure DevOps. However, as the practices and guidance mature, teams must consider them within a broader risk context. We argue here how secure DevOps could profit from engaging with risk-related activities within organisations. We focus on combining Risk Assessment (RA), particularly Threat Modelling (TM), with security considerations applied early in the software life-cycle. Our contribution provides a roadmap for enacting secure DevOps alongside risk objectives, devising informed ways to improve TM, and establishing effective security underpinnings in organisations focusing on software products and services. We aim to outline proven methods from the literature on the subject, discussing case studies, technologies, and tools. We present a case study of a real-world-inspired organisation employing the proposed approach, followed by a discussion. Enforcing these novel mechanisms centred on security requires investment, training, and stakeholder engagement. It also requires understanding the actual benefits of automation in Continuous Integration/Continuous Delivery (CI/CD) settings that improve the overall quality of software solutions reaching the market.
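To make the CI/CD angle concrete, below is a minimal Python sketch of a pipeline "risk gate" that scores threat-model entries and blocks a build when the aggregate risk is too high; all names, scales, and thresholds are illustrative assumptions, not the paper's method.

```python
# Minimal sketch of a CI/CD "risk gate": scores threat-model entries and fails
# the pipeline when the aggregate risk exceeds a threshold. All names, scales,
# and thresholds are illustrative assumptions, not taken from the paper.
from dataclasses import dataclass

@dataclass
class Threat:
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)
    mitigated: bool = False

    @property
    def score(self) -> int:
        return 0 if self.mitigated else self.likelihood * self.impact

def risk_gate(threats: list[Threat], max_single: int = 15, max_total: int = 40) -> bool:
    """Return True if the build may proceed, False if it should be blocked."""
    total = sum(t.score for t in threats)
    worst = max((t.score for t in threats), default=0)
    return worst <= max_single and total <= max_total

if __name__ == "__main__":
    model = [
        Threat("SQL injection in login form", likelihood=4, impact=5),
        Threat("Secrets committed to repository", likelihood=3, impact=4, mitigated=True),
        Threat("Outdated TLS configuration", likelihood=2, impact=3),
    ]
    ok = risk_gate(model)
    print("risk gate:", "pass" if ok else "fail")
    raise SystemExit(0 if ok else 1)  # non-zero exit fails the CI job
```

In a real pipeline, a script of this kind would typically run as a dedicated CI job, with the non-zero exit code failing the stage.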
Related papers
- Cross-Modality Safety Alignment [73.8765529028288]
We introduce a novel safety alignment challenge called Safe Inputs but Unsafe Output (SIUO) to evaluate cross-modality safety alignment.
To empirically investigate this problem, we developed the SIUO, a cross-modality benchmark encompassing 9 critical safety domains, such as self-harm, illegal activities, and privacy violations.
Our findings reveal substantial safety vulnerabilities in both closed- and open-source LVLMs, underscoring the inadequacy of current models to reliably interpret and respond to complex, real-world scenarios.
arXiv Detail & Related papers (2024-06-21T16:14:15Z)
- AI for DevSecOps: A Landscape and Future Opportunities [6.513361705307775]
DevSecOps has emerged as one of the most rapidly evolving software development paradigms.
With the growing concerns surrounding security in software systems, the DevSecOps paradigm has gained prominence.
Integrating security into the DevOps workflow can impact agility and impede delivery speed.
arXiv Detail & Related papers (2024-04-07T07:24:58Z)
- Mapping LLM Security Landscapes: A Comprehensive Stakeholder Risk Assessment Proposal [0.0]
We propose a risk assessment process using tools such as the risk rating methodology used for traditional systems.
We conduct scenario analysis to identify potential threat agents and map the dependent system components against vulnerability factors.
We also map threats against three key stakeholder groups.
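As a rough illustration of such a stakeholder-oriented risk rating (the factor names, scales, and threat entries below are assumptions for illustration, not the paper's data), each threat can be scored per stakeholder group as likelihood times impact:

```python
# Hypothetical sketch of a stakeholder-oriented risk rating, loosely inspired by
# likelihood x impact scoring; threats, factors, and scales are illustrative only.
STAKEHOLDERS = ("end_users", "developers", "providers")

threats = {
    "prompt_injection":   {"likelihood": 4, "impact": {"end_users": 4, "developers": 2, "providers": 3}},
    "training_data_leak": {"likelihood": 2, "impact": {"end_users": 5, "developers": 3, "providers": 4}},
}

def risk_matrix(threats: dict) -> dict:
    """Map each threat to a per-stakeholder risk score (likelihood x impact)."""
    return {
        name: {s: t["likelihood"] * t["impact"][s] for s in STAKEHOLDERS}
        for name, t in threats.items()
    }

for threat, scores in risk_matrix(threats).items():
    print(threat, scores)
```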
arXiv Detail & Related papers (2024-03-20T05:17:22Z)
- Welcome Your New AI Teammate: On Safety Analysis by Leashing Large Language Models [0.6699222582814232]
"Hazard Analysis & Risk Assessment" (HARA) is an essential step to start the safety requirements specification.
We propose a framework to support a higher degree of automation of HARA with Large Language Models (LLMs).
arXiv Detail & Related papers (2024-03-14T16:56:52Z)
- Automated Security Findings Management: A Case Study in Industrial DevOps [3.7798600249187295]
We propose a methodology for the management of security findings in industrial DevOps projects.
As an instance of the methodology, we developed the Security Flama, a semantic knowledge base for the automated management of security findings.
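A minimal sketch of what such automated findings management can look like, assuming scanner reports arrive as simple dictionaries; the field names and deduplication key are assumptions, not the Security Flama design:

```python
# Minimal sketch of automated security-findings management: normalise reports
# from multiple scanners and deduplicate them by (rule, component) before triage.
# Field names and the deduplication key are illustrative assumptions.
from collections import defaultdict

def merge_findings(reports: list[list[dict]]) -> dict:
    store = defaultdict(lambda: {"sources": set(), "severity": "low"})
    order = {"low": 0, "medium": 1, "high": 2, "critical": 3}
    for report in reports:
        for f in report:
            key = (f["rule_id"], f["component"])
            entry = store[key]
            entry["sources"].add(f["scanner"])
            if order[f["severity"]] > order[entry["severity"]]:
                entry["severity"] = f["severity"]  # keep the worst reported severity
    return dict(store)

sast = [{"scanner": "sast", "rule_id": "CWE-89", "component": "api/login.py", "severity": "high"}]
dast = [{"scanner": "dast", "rule_id": "CWE-89", "component": "api/login.py", "severity": "medium"}]
for key, finding in merge_findings([sast, dast]).items():
    print(key, finding["severity"], sorted(finding["sources"]))
```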
arXiv Detail & Related papers (2024-01-12T14:35:51Z)
- An Introduction to Adaptive Software Security [0.0]
This paper presents an innovative approach integrating the MAPE-K loop and the Software Development Life Cycle (SDLC).
It proactively embeds security policies throughout development, reducing vulnerabilities at different levels of software engineering.
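For illustration, a skeletal MAPE-K loop (Monitor, Analyse, Plan, Execute over shared Knowledge) around a single security policy might look like the following sketch; the monitored signal and adaptation action are hypothetical placeholders, not the paper's concrete design:

```python
# Skeleton of a MAPE-K loop applied to one security policy; the telemetry source
# and the adaptation actions are hypothetical placeholders for illustration.
import random

knowledge = {"failed_logins": 0, "lockout_threshold": 5, "lockout_enabled": False}

def monitor() -> None:
    knowledge["failed_logins"] += random.randint(0, 3)  # stand-in for real telemetry

def analyse() -> bool:
    return knowledge["failed_logins"] >= knowledge["lockout_threshold"]

def plan(anomaly: bool) -> str:
    return "enable_lockout" if anomaly and not knowledge["lockout_enabled"] else "no_op"

def execute(action: str) -> None:
    if action == "enable_lockout":
        knowledge["lockout_enabled"] = True
        print("adaptation: temporary account lockout enabled")

for _ in range(5):  # a few control-loop iterations
    monitor()
    execute(plan(analyse()))
```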
arXiv Detail & Related papers (2023-12-28T20:53:11Z)
- Safeguarded Progress in Reinforcement Learning: Safe Bayesian Exploration for Control Policy Synthesis [63.532413807686524]
This paper addresses the problem of maintaining safety during training in Reinforcement Learning (RL).
We propose a new architecture that handles the trade-off between efficient progress and safety during exploration.
arXiv Detail & Related papers (2023-12-18T16:09:43Z)
- Towards Safer Generative Language Models: A Survey on Safety Risks, Evaluations, and Improvements [76.80453043969209]
This survey presents a framework for safety research pertaining to large models.
We begin by introducing safety issues of wide concern, then delve into safety evaluation methods for large models.
We explore the strategies for enhancing large model safety from training to deployment.
arXiv Detail & Related papers (2023-02-18T09:32:55Z)
- Evaluating Model-free Reinforcement Learning toward Safety-critical Tasks [70.76757529955577]
This paper revisits prior work in this scope from the perspective of state-wise safe RL.
We propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection.
To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit.
arXiv Detail & Related papers (2022-12-12T06:30:17Z)
- Empowered and Embedded: Ethics and Agile Processes [60.63670249088117]
We argue that ethical considerations need to be embedded into the (agile) software development process.
We emphasise the possibility of implementing ethical deliberations in existing, well-established agile software development processes.
arXiv Detail & Related papers (2021-07-15T11:14:03Z)
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.