Resilient Cloud cluster with DevSecOps security model, automates a data analysis, vulnerability search and risk calculation
- URL: http://arxiv.org/abs/2412.16190v1
- Date: Sun, 15 Dec 2024 13:11:48 GMT
- Title: Resilient Cloud cluster with DevSecOps security model, automates a data analysis, vulnerability search and risk calculation
- Authors: Abed Saif Ahmed Alghawli, Tamara Radivilova
- Abstract summary: The article presents the main methods of deploying web applications and ways to increase the level of information security at all stages of product development. The cloud cluster was deployed using Terraform and a Jenkins pipeline, which checks program code for vulnerabilities. The algorithm for calculating risk and losses is based on statistical data and the concept of the FAIR information risk assessment methodology.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automated, secure software development is an important task of digitalization and is addressed with the DevSecOps approach. An important part of DevSecOps is continuous risk assessment, which is necessary to identify and evaluate risk factors. Combining the development cycle with continuous risk assessment creates synergies in software development and operation and minimizes vulnerabilities. The article presents the main methods of deploying web applications and ways to increase the level of information security at all stages of product development, compares different types of infrastructures and cloud computing providers, and analyzes modern tools used to automate processes. The cloud cluster was deployed using Terraform and a Jenkins pipeline written in the Groovy programming language; the pipeline checks program code for vulnerabilities and allows violations to be fixed at the earliest stages of developing secure web applications. The developed cluster implements the proposed algorithm for automated risk assessment based on the calculation (modeling) of threats and vulnerabilities of the cloud infrastructure; it operates in real time, periodically collecting information and adjusting the system in accordance with the assessed risk and the applied controls. The algorithm for calculating risk and losses is based on statistical data and the concept of the FAIR information risk assessment methodology. The risk value obtained using the proposed method is quantitative, which allows more efficient forecasting of information security costs in software development.
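The abstract describes two concrete mechanisms: a Jenkins pipeline written in Groovy that scans program code for vulnerabilities, and a risk algorithm built on the FAIR decomposition (risk = loss event frequency x loss magnitude, with loss event frequency = threat event frequency x vulnerability). A minimal Groovy Jenkinsfile sketch of that flow is shown below; the stage layout, the choice of Trivy as scanner, the numeric inputs, and the failure threshold are illustrative assumptions rather than the authors' published pipeline.

```groovy
// Hypothetical Jenkinsfile sketch; tool choice, inputs, and thresholds are assumptions.
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                checkout scm
            }
        }
        stage('Vulnerability scan') {
            steps {
                // Illustrative scanner invocation; any SAST/dependency scanner could sit here.
                sh 'trivy fs --exit-code 0 --format json -o scan-report.json .'
            }
        }
        stage('FAIR-style risk calculation') {
            steps {
                script {
                    // FAIR decomposition: loss event frequency = threat event frequency * vulnerability;
                    // risk (annualized loss exposure) = loss event frequency * loss magnitude.
                    def tef  = 12.0     // assumed threat events per year, from statistical data
                    def vuln = 0.25     // assumed probability a threat event becomes a loss event
                    def lm   = 40000.0  // assumed average loss per event, in monetary units
                    def lef  = tef * vuln
                    def risk = lef * lm
                    echo "Estimated annualized loss exposure: ${risk}"
                    if (risk > 100000.0) {  // illustrative risk-acceptance threshold
                        error 'Calculated risk exceeds the acceptable threshold'
                    }
                }
            }
        }
    }
}
```

In the approach the abstract describes, the frequency and loss inputs would be driven by the periodically collected statistics rather than hard-coded constants as in this sketch.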
Related papers
- strideSEA: A STRIDE-centric Security Evaluation Approach [1.996354642790599]
strideSEA integrates STRIDE as the central classification scheme into the security activities of threat modeling, attack scenario analysis, risk analysis, and countermeasure recommendation.
The application of strideSEA is demonstrated in a real-world online immunization system case study.
arXiv Detail & Related papers (2025-03-24T18:00:17Z)
- A Survey of Model Extraction Attacks and Defenses in Distributed Computing Environments [55.60375624503877]
Model Extraction Attacks (MEAs) threaten modern machine learning systems by enabling adversaries to steal models, exposing intellectual property and training data.
This survey is motivated by the urgent need to understand how the unique characteristics of cloud, edge, and federated deployments shape attack vectors and defense requirements.
We systematically examine the evolution of attack methodologies and defense mechanisms across these environments, demonstrating how environmental factors influence security strategies in critical sectors such as autonomous vehicles, healthcare, and financial services.
arXiv Detail & Related papers (2025-02-22T03:46:50Z)
- AILuminate: Introducing v1.0 of the AI Risk and Reliability Benchmark from MLCommons [62.374792825813394]
This paper introduces AILuminate v1.0, the first comprehensive industry-standard benchmark for assessing AI-product risk and reliability.
The benchmark evaluates an AI system's resistance to prompts designed to elicit dangerous, illegal, or undesirable behavior in 12 hazard categories.
arXiv Detail & Related papers (2025-02-19T05:58:52Z)
- Computational Safety for Generative AI: A Signal Processing Perspective [65.268245109828]
Computational safety is a mathematical framework that enables the quantitative assessment, formulation, and study of safety challenges in GenAI.
We show how sensitivity analysis and loss landscape analysis can be used to detect malicious prompts with jailbreak attempts.
We discuss key open research challenges, opportunities, and the essential role of signal processing in computational AI safety.
arXiv Detail & Related papers (2025-02-18T02:26:50Z)
- EARBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents [53.717918131568936]
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction. Foundation models as the "brain" of EAI agents for high-level task planning have shown promising results. However, the deployment of these agents in physical environments presents significant safety challenges. This study introduces EARBench, a novel framework for automated physical risk assessment in EAI scenarios.
arXiv Detail & Related papers (2024-08-08T13:19:37Z)
- Securing the Open RAN Infrastructure: Exploring Vulnerabilities in Kubernetes Deployments [60.51751612363882]
We investigate the security implications of software-based Open Radio Access Network (RAN) systems.
We highlight the presence of potential vulnerabilities and misconfigurations in the infrastructure supporting the Near Real-Time RAN Intelligent Controller (RIC) cluster.
arXiv Detail & Related papers (2024-05-03T07:18:45Z)
- Towards Deep Learning Enabled Cybersecurity Risk Assessment for Microservice Architectures [3.0936354370614607]
CyberWise Predictor is a framework designed for predicting and assessing security risks associated with microservice architectures.
Our framework achieves an average accuracy of 92% in automatically predicting vulnerability metrics for new vulnerabilities.
arXiv Detail & Related papers (2024-03-22T12:42:33Z)
- Mapping LLM Security Landscapes: A Comprehensive Stakeholder Risk Assessment Proposal [0.0]
We propose a risk assessment process using tools such as the risk rating methodology used for traditional systems.
We conduct scenario analysis to identify potential threat agents and map the dependent system components against vulnerability factors.
We also map threats against three key stakeholder groups.
arXiv Detail & Related papers (2024-03-20T05:17:22Z)
- Software Repositories and Machine Learning Research in Cyber Security [0.0]
The integration of robust cyber security defenses has become essential across all phases of software development.
Attempts have been made to leverage topic modeling and machine learning to detect early-stage vulnerabilities in the software requirements process.
arXiv Detail & Related papers (2023-11-01T17:46:07Z)
- Model evaluation for extreme risks [46.53170857607407]
Further progress in AI development could lead to capabilities that pose extreme risks, such as offensive cyber capabilities or strong manipulation skills.
We explain why model evaluation is critical for addressing extreme risks.
arXiv Detail & Related papers (2023-05-24T16:38:43Z)
- Towards an Improved Understanding of Software Vulnerability Assessment Using Data-Driven Approaches [0.0]
The thesis advances the field of software security by providing knowledge and automation support for software vulnerability assessment.
The key contributions include a systematisation of knowledge, along with a suite of novel data-driven techniques.
arXiv Detail & Related papers (2022-07-24T10:22:28Z)
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)