Towards Deep Learning Enabled Cybersecurity Risk Assessment for Microservice Architectures
- URL: http://arxiv.org/abs/2403.15169v1
- Date: Fri, 22 Mar 2024 12:42:33 GMT
- Title: Towards Deep Learning Enabled Cybersecurity Risk Assessment for Microservice Architectures
- Authors: Majid Abdulsatar, Hussain Ahmad, Diksha Goel, Faheem Ullah
- Abstract summary: CyberWise Predictor is a framework designed for predicting and assessing security risks associated with microservice architectures.
Our framework achieves an average accuracy of 92% in automatically predicting vulnerability metrics for new vulnerabilities.
- Score: 3.0936354370614607
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The widespread adoption of microservice architectures has given rise to a new set of software security challenges. These challenges stem from the unique features inherent in microservices. Systematically assessing and addressing such challenges requires rigorous software security risk assessment. However, existing approaches prove inefficient in accurately evaluating the security risks associated with microservice architectures. To address this issue, we propose CyberWise Predictor, a framework designed for predicting and assessing security risks associated with microservice architectures. Our framework employs deep learning-based natural language processing models to analyze vulnerability descriptions and predict vulnerability metrics for assessing security risks. Our experimental evaluation shows the effectiveness of CyberWise Predictor, achieving an average accuracy of 92% in automatically predicting vulnerability metrics for new vulnerabilities. Our framework and findings serve as a guide for software developers to identify and mitigate security risks in microservice architectures.
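The core idea of the abstract, mapping free-text vulnerability descriptions to vulnerability metrics, can be illustrated with a toy classifier. The paper uses deep learning-based NLP models; the stdlib-only Naive Bayes sketch below is a hypothetical stand-in, with made-up training descriptions, that only demonstrates the text-to-metric mapping (here, a CVSS-style Attack Vector metric).

```python
# Toy sketch of the text -> vulnerability-metric idea behind CyberWise
# Predictor. NOT the paper's model: a bag-of-words Naive Bayes classifier
# predicting an Attack Vector label (NETWORK vs LOCAL) from a description.
import math
from collections import Counter, defaultdict

def tokenize(text):
    return text.lower().split()

class NaiveBayesMetricPredictor:
    def fit(self, descriptions, labels):
        self.class_counts = Counter(labels)
        self.word_counts = defaultdict(Counter)
        self.vocab = set()
        for text, label in zip(descriptions, labels):
            for tok in tokenize(text):
                self.word_counts[label][tok] += 1
                self.vocab.add(tok)
        return self

    def predict(self, text):
        best, best_lp = None, float("-inf")
        total = sum(self.class_counts.values())
        for label, count in self.class_counts.items():
            lp = math.log(count / total)
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for tok in tokenize(text):
                # Laplace smoothing so unseen words do not zero the score
                lp += math.log((self.word_counts[label][tok] + 1) / denom)
            if lp > best_lp:
                best, best_lp = label, lp
        return best

# Hypothetical training data: descriptions paired with Attack Vector labels.
train = [
    ("remote attacker sends crafted http request to the api gateway", "NETWORK"),
    ("unauthenticated remote code execution via exposed rest endpoint", "NETWORK"),
    ("local user escalates privileges through a setuid binary", "LOCAL"),
    ("attacker with local shell access reads a world-readable config file", "LOCAL"),
]
model = NaiveBayesMetricPredictor().fit(*zip(*train))
```

In the paper's pipeline, a transformer-based model would replace this classifier, but the interface (description in, metric out) is the same.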
Related papers
- A Relevance Model for Threat-Centric Ranking of Cybersecurity Vulnerabilities [0.29998889086656577]
The relentless process of tracking and remediating vulnerabilities is a top concern for cybersecurity professionals.
We provide a framework for vulnerability management specifically focused on mitigating threats using adversary criteria derived from MITRE ATT&CK.
Our results show an average 71.5%-91.3% improvement in identifying vulnerabilities likely to be targeted and exploited by cyber threat actors.
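One simple way to realise such threat-centric ranking is to score each vulnerability by the overlap between its mapped MITRE ATT&CK techniques and the techniques known to be used by adversaries of interest. The sketch below is an illustrative assumption, not the paper's relevance model, and the CVE ids and technique mappings are made up.

```python
# Hedged sketch of adversary-driven vulnerability ranking: rank CVEs by the
# fraction of their mapped ATT&CK techniques that a threat actor is known
# to use. Technique IDs and CVE mappings below are illustrative only.

def relevance_score(vuln_techniques, adversary_techniques):
    """Fraction of the vulnerability's techniques used by the adversary."""
    if not vuln_techniques:
        return 0.0
    overlap = set(vuln_techniques) & set(adversary_techniques)
    return len(overlap) / len(set(vuln_techniques))

def rank_vulnerabilities(vuln_map, adversary_techniques):
    """Return CVE ids sorted by descending adversary relevance."""
    return sorted(
        vuln_map,
        key=lambda cve: relevance_score(vuln_map[cve], adversary_techniques),
        reverse=True,
    )

# Hypothetical mappings (all identifiers invented for illustration).
vulns = {
    "CVE-A": ["T1190", "T1059"],           # public-facing app, scripting
    "CVE-B": ["T1068"],                    # privilege escalation
    "CVE-C": ["T1566", "T1204", "T1027"],  # phishing, user execution, obfuscation
}
actor_techniques = ["T1190", "T1059", "T1068"]
ranking = rank_vulnerabilities(vulns, actor_techniques)
```

Vulnerabilities whose techniques fully match the actor's playbook rise to the top of the remediation queue, which is the intuition the improvement figures above quantify.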
arXiv Detail & Related papers (2024-06-09T23:29:12Z) - Securing the Open RAN Infrastructure: Exploring Vulnerabilities in Kubernetes Deployments [60.51751612363882]
We investigate the security implications of open-source and software-based Open Radio Access Network (RAN) systems.
We highlight the presence of potential vulnerabilities and misconfigurations in the infrastructure supporting the Near Real-Time RAN Intelligent Controller (RIC) cluster.
arXiv Detail & Related papers (2024-05-03T07:18:45Z) - Unveiling the Misuse Potential of Base Large Language Models via In-Context Learning [61.2224355547598]
Open-sourcing of large language models (LLMs) accelerates application development, innovation, and scientific progress.
Our investigation exposes a critical oversight in the assumption that base, non-aligned LLMs are safe from misuse.
By deploying carefully designed demonstrations, our research demonstrates that base LLMs could effectively interpret and execute malicious instructions.
arXiv Detail & Related papers (2024-04-16T13:22:54Z) - Mapping LLM Security Landscapes: A Comprehensive Stakeholder Risk Assessment Proposal [0.0]
We propose a risk assessment process using established tools such as the risk rating methodology applied to traditional systems.
We conduct scenario analysis to identify potential threat agents and map the dependent system components against vulnerability factors.
We also map threats against three key stakeholder groups.
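The kind of risk rating computation this proposal builds on typically averages likelihood factors and impact factors on a 0-9 scale and combines them into a severity bucket. The factor names and thresholds in the sketch below are illustrative assumptions, not the paper's exact scheme.

```python
# Minimal sketch of a classic risk-rating computation: average likelihood
# and impact factors (0-9 scale), bucket each, and multiply into a raw
# severity. Factor names and bucket thresholds are hypothetical.

def risk_rating(likelihood_factors, impact_factors):
    likelihood = sum(likelihood_factors.values()) / len(likelihood_factors)
    impact = sum(impact_factors.values()) / len(impact_factors)

    def bucket(x):
        return "LOW" if x < 3 else "MEDIUM" if x < 6 else "HIGH"

    return likelihood * impact, (bucket(likelihood), bucket(impact))

# Hypothetical scoring of one threat scenario against one stakeholder group.
severity, buckets = risk_rating(
    {"skill_required": 6, "opportunity": 7, "ease_of_discovery": 5},
    {"loss_of_confidentiality": 8, "reputation_damage": 6},
)
```

Repeating this per threat agent and per stakeholder group yields the threat-to-stakeholder mapping the abstract describes.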
arXiv Detail & Related papers (2024-03-20T05:17:22Z) - ASSERT: Automated Safety Scenario Red Teaming for Evaluating the Robustness of Large Language Models [65.79770974145983]
ASSERT, Automated Safety Scenario Red Teaming, consists of three methods -- semantically aligned augmentation, target bootstrapping, and adversarial knowledge injection.
We partition our prompts into four safety domains for a fine-grained analysis of how the domain affects model performance.
We find statistically significant performance differences of up to 11% in absolute classification accuracy among semantically related scenarios and error rates of up to 19% absolute error in zero-shot adversarial settings.
arXiv Detail & Related papers (2023-10-14T17:10:28Z) - Towards Safer Generative Language Models: A Survey on Safety Risks, Evaluations, and Improvements [76.80453043969209]
This survey presents a framework for safety research pertaining to large models.
We begin by introducing safety issues of wide concern, then delve into safety evaluation methods for large models.
We explore the strategies for enhancing large model safety from training to deployment.
arXiv Detail & Related papers (2023-02-18T09:32:55Z) - Do Software Security Practices Yield Fewer Vulnerabilities? [6.6840472845873276]
The goal of this study is to assist practitioners and researchers in making informed decisions on which security practices to adopt.
Four security practices were the most important practices influencing vulnerability count.
The number of reported vulnerabilities increased rather than reduced as the aggregate security score of the packages increased.
arXiv Detail & Related papers (2022-10-20T20:04:02Z) - Towards an Improved Understanding of Software Vulnerability Assessment Using Data-Driven Approaches [0.0]
The thesis advances the field of software security by providing knowledge and automation support for software vulnerability assessment.
The key contributions include a systematisation of knowledge, along with a suite of novel data-driven techniques.
arXiv Detail & Related papers (2022-07-24T10:22:28Z) - Automated Security Assessment for the Internet of Things [6.690766107366799]
We propose an automated security assessment framework for IoT networks.
Our framework first leverages machine learning and natural language processing to analyze vulnerability descriptions.
A security model built from these results then automatically assesses the security of the IoT network by capturing potential attack paths.
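Capturing potential attack paths can be sketched as simple-path enumeration over a small attack graph, where an edge means "compromising X can reach Y". This is an illustrative stand-in, with invented node names, for the richer graphical security model the framework builds (which also attaches per-node vulnerability scores).

```python
# Illustrative attack-path enumeration over a toy IoT attack graph.
# Edges mean "a compromise of the source can reach the target"; node
# names are hypothetical, not from the paper.

def attack_paths(graph, src, dst, path=None):
    """Depth-first enumeration of simple attack paths from src to dst."""
    path = (path or []) + [src]
    if src == dst:
        return [path]
    paths = []
    for nxt in graph.get(src, []):
        if nxt not in path:  # skip nodes already on the path (no cycles)
            paths.extend(attack_paths(graph, nxt, dst, path))
    return paths

iot_network = {
    "internet": ["gateway"],
    "gateway": ["camera", "thermostat"],
    "camera": ["server"],
    "thermostat": ["server"],
    "server": [],
}
paths = attack_paths(iot_network, "internet", "server")
```

Each enumerated path is a candidate attack the assessment can then weight by the vulnerability metrics extracted from the descriptions.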
arXiv Detail & Related papers (2021-09-09T04:42:24Z) - Multi Agent System for Machine Learning Under Uncertainty in Cyber Physical Manufacturing System [78.60415450507706]
Recent advancements in predictive machine learning have led to its application in various use cases in manufacturing.
Most research has focused on maximising predictive accuracy without addressing the associated uncertainty.
In this paper, we determine the sources of uncertainty in machine learning and establish the success criteria of a machine learning system to function well under uncertainty.
arXiv Detail & Related papers (2021-07-28T10:28:05Z) - Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.