Towards Deep Learning Enabled Cybersecurity Risk Assessment for Microservice Architectures
- URL: http://arxiv.org/abs/2403.15169v1
- Date: Fri, 22 Mar 2024 12:42:33 GMT
- Title: Towards Deep Learning Enabled Cybersecurity Risk Assessment for Microservice Architectures
- Authors: Majid Abdulsatar, Hussain Ahmad, Diksha Goel, Faheem Ullah
- Abstract summary: CyberWise Predictor is a framework designed for predicting and assessing security risks associated with microservice architectures.
Our framework achieves an average accuracy of 92% in automatically predicting vulnerability metrics for new vulnerabilities.
- Score: 3.0936354370614607
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The widespread adoption of microservice architectures has given rise to a new set of software security challenges. These challenges stem from the unique features inherent in microservices. It is therefore important to systematically assess and address them, for instance through software security risk assessment. However, existing approaches prove inefficient in accurately evaluating the security risks associated with microservice architectures. To address this issue, we propose CyberWise Predictor, a framework designed for predicting and assessing security risks associated with microservice architectures. Our framework employs deep learning-based natural language processing models to analyze vulnerability descriptions and predict vulnerability metrics for assessing security risks. Our experimental evaluation shows the effectiveness of CyberWise Predictor, achieving an average accuracy of 92% in automatically predicting vulnerability metrics for new vulnerabilities. Our framework and findings serve as a guide for software developers to identify and mitigate security risks in microservice architectures.
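The core idea of the abstract, predicting a vulnerability metric from a free-text vulnerability description, can be sketched in miniature. The paper uses deep learning NLP models; the stdlib-only bag-of-words nearest-centroid classifier below is a deliberately simplified, hypothetical stand-in, with toy descriptions and labels that are illustrative rather than taken from the paper's dataset.

```python
# Simplified sketch of the idea behind CyberWise Predictor: predict a CVSS
# metric (here, Attack Vector) from a vulnerability description.
# The paper uses deep-learning NLP models; this bag-of-words
# nearest-centroid classifier is a hypothetical, minimal stand-in.
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def cosine(a, b):
    """Cosine similarity between two term-frequency Counters."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

# Toy training data: (description, CVSS Attack Vector label) - illustrative only.
TRAIN = [
    ("remote attacker sends crafted HTTP request to the API gateway", "NETWORK"),
    ("unauthenticated remote code execution via exposed REST endpoint", "NETWORK"),
    ("local user escalates privileges through a world-writable config file", "LOCAL"),
    ("attacker with shell access reads sensitive container secrets", "LOCAL"),
]

# Build one term-frequency centroid per label.
centroids = {}
for desc, label in TRAIN:
    centroids.setdefault(label, Counter()).update(tokenize(desc))

def predict_attack_vector(description):
    """Return the label whose centroid is most similar to the description."""
    counts = Counter(tokenize(description))
    return max(centroids, key=lambda lbl: cosine(counts, centroids[lbl]))
```

A real system would replace the centroid classifier with a fine-tuned transformer and predict all CVSS base metrics, but the input/output contract (description in, metric label out) is the same.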
Related papers
- SafeBench: A Safety Evaluation Framework for Multimodal Large Language Models [75.67623347512368]
We propose SafeBench, a comprehensive framework designed for conducting safety evaluations of MLLMs.
Our framework consists of a comprehensive harmful query dataset and an automated evaluation protocol.
Based on our framework, we conducted large-scale experiments on 15 widely-used open-source MLLMs and 6 commercial MLLMs.
arXiv Detail & Related papers (2024-10-24T17:14:40Z) - EAIRiskBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents [47.69642609574771]
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction.
Foundation models as the "brain" of EAI agents for high-level task planning have shown promising results.
However, the deployment of these agents in physical environments presents significant safety challenges.
This study introduces EAIRiskBench, a novel framework for automated physical risk assessment in EAI scenarios.
arXiv Detail & Related papers (2024-08-08T13:19:37Z) - Microservice Vulnerability Analysis: A Literature Review with Empirical Insights [2.883578416080909]
We identify, analyze, and report 126 security vulnerabilities inherent in microservice architectures.
This comprehensive analysis enables us to propose a taxonomy that categorizes microservice vulnerabilities based on the distinctive features of microservice architectures.
We also conduct an empirical analysis by performing vulnerability scans on four diverse microservice benchmark applications.
arXiv Detail & Related papers (2024-07-31T08:13:42Z) - A Relevance Model for Threat-Centric Ranking of Cybersecurity Vulnerabilities [0.29998889086656577]
The relentless process of tracking and remediating vulnerabilities is a top concern for cybersecurity professionals.
We provide a framework for vulnerability management specifically focused on mitigating threats using adversary criteria derived from MITRE ATT&CK.
Our results show an average improvement of 71.5%-91.3% in identifying vulnerabilities likely to be targeted and exploited by cyber threat actors.
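The threat-centric ranking described above can be illustrated with a small sketch: score each vulnerability by how many of its mapped MITRE ATT&CK techniques appear in observed adversary behavior, then sort. The technique IDs below are real ATT&CK identifiers, but the mappings and the scoring rule are hypothetical illustrations, not the paper's actual relevance model.

```python
# Hypothetical sketch of threat-centric vulnerability ranking: prioritize
# vulnerabilities whose mapped MITRE ATT&CK techniques overlap with
# techniques favored by known adversaries. Mappings are illustrative.
ADVERSARY_TECHNIQUES = {"T1190", "T1059", "T1078"}  # techniques seen in threat reports

VULNS = {
    "CVE-A": {"T1190", "T1059"},   # exploit public-facing app, command interpreter
    "CVE-B": {"T1027"},            # obfuscated files or information only
    "CVE-C": {"T1078"},            # valid accounts
}

def rank_by_threat(vulns, adversary_techniques):
    """Sort vulnerability IDs by overlap with adversary-favored techniques, descending."""
    return sorted(
        vulns,
        key=lambda v: len(vulns[v] & adversary_techniques),
        reverse=True,
    )
```

With these toy mappings, CVE-A (two adversary-favored techniques) outranks CVE-C (one) and CVE-B (none), which is the essence of remediating threat-relevant vulnerabilities first.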
arXiv Detail & Related papers (2024-06-09T23:29:12Z) - Securing the Open RAN Infrastructure: Exploring Vulnerabilities in Kubernetes Deployments [60.51751612363882]
We investigate the security implications of software-based Open Radio Access Network (RAN) systems.
We highlight the presence of potential vulnerabilities and misconfigurations in the infrastructure supporting the Near Real-Time RAN Intelligent Controller (RIC) cluster.
arXiv Detail & Related papers (2024-05-03T07:18:45Z) - Unveiling the Misuse Potential of Base Large Language Models via In-Context Learning [61.2224355547598]
Open-sourcing of large language models (LLMs) accelerates application development, innovation, and scientific progress.
Our investigation exposes a critical oversight in the belief that such models are safe to release.
By deploying carefully designed demonstrations, our research demonstrates that base LLMs could effectively interpret and execute malicious instructions.
arXiv Detail & Related papers (2024-04-16T13:22:54Z) - Mapping LLM Security Landscapes: A Comprehensive Stakeholder Risk Assessment Proposal [0.0]
We propose a risk assessment process using established tools such as the risk rating methodology applied to traditional systems.
We conduct scenario analysis to identify potential threat agents and map the dependent system components against vulnerability factors.
We also map threats against three key stakeholder groups.
arXiv Detail & Related papers (2024-03-20T05:17:22Z) - ASSERT: Automated Safety Scenario Red Teaming for Evaluating the Robustness of Large Language Models [65.79770974145983]
ASSERT, Automated Safety Scenario Red Teaming, consists of three methods -- semantically aligned augmentation, target bootstrapping, and adversarial knowledge injection.
We partition our prompts into four safety domains for a fine-grained analysis of how the domain affects model performance.
We find statistically significant performance differences of up to 11% in absolute classification accuracy among semantically related scenarios, and absolute error rates of up to 19% in zero-shot adversarial settings.
arXiv Detail & Related papers (2023-10-14T17:10:28Z) - Towards an Improved Understanding of Software Vulnerability Assessment Using Data-Driven Approaches [0.0]
The thesis advances the field of software security by providing knowledge and automation support for software vulnerability assessment.
The key contributions include a systematisation of knowledge, along with a suite of novel data-driven techniques.
arXiv Detail & Related papers (2022-07-24T10:22:28Z) - Automated Security Assessment for the Internet of Things [6.690766107366799]
We propose an automated security assessment framework for IoT networks.
Our framework first leverages machine learning and natural language processing to analyze vulnerability descriptions.
This security model automatically assesses the security of the IoT network by capturing potential attack paths.
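The graphical security model described for the IoT framework, which captures potential attack paths through a network, can be sketched as path enumeration over a reachability graph. The device names and edges below are hypothetical; the paper's actual model and its scoring are richer than this illustration.

```python
# Illustrative sketch of a graphical security model: nodes are devices,
# and an edge A -> B means "an attacker on A can reach B via a known
# vulnerability". The topology here is hypothetical.
REACHABLE = {
    "internet":   ["router"],
    "router":     ["camera", "smart_lock"],
    "camera":     ["hub"],
    "smart_lock": [],
    "hub":        ["database"],
    "database":   [],
}

def attack_paths(graph, src, dst, path=None):
    """Enumerate all simple attack paths from src to dst by depth-first search."""
    path = (path or []) + [src]
    if src == dst:
        return [path]
    paths = []
    for nxt in graph.get(src, []):
        if nxt not in path:  # keep paths simple: no revisiting a device
            paths.extend(attack_paths(graph, nxt, dst, path))
    return paths
```

Enumerating paths from an entry point (here "internet") to a critical asset (here "database") is what lets such a model surface which device compromises matter most for the overall network risk.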
arXiv Detail & Related papers (2021-09-09T04:42:24Z) - Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.