SoK: Identifying Limitations and Bridging Gaps of Cybersecurity Capability Maturity Models (CCMMs)
- URL: http://arxiv.org/abs/2408.16140v1
- Date: Wed, 28 Aug 2024 21:00:20 GMT
- Title: SoK: Identifying Limitations and Bridging Gaps of Cybersecurity Capability Maturity Models (CCMMs)
- Authors: Lasini Liyanage, Nalin Asanka Gamagedara Arachchilage, Giovanni Russello
- Abstract summary: Cybersecurity Capability Maturity Models (CCMMs) emerge as pivotal tools in enhancing organisational cybersecurity posture.
CCMMs provide a structured framework to guide organisations in assessing their current cybersecurity capabilities, identifying critical gaps, and prioritising improvements.
However, the full potential of CCMMs is often not realised due to inherent limitations within the models and challenges encountered during their implementation and adoption processes.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the rapidly evolving digital landscape, where organisations are increasingly vulnerable to cybersecurity threats, Cybersecurity Capability Maturity Models (CCMMs) emerge as pivotal tools in enhancing organisational cybersecurity posture. CCMMs provide a structured framework to guide organisations in assessing their current cybersecurity capabilities, identifying critical gaps, and prioritising improvements. However, the full potential of CCMMs is often not realised due to inherent limitations within the models and challenges encountered during their implementation and adoption processes. These limitations and challenges can significantly hamper the efficacy of CCMMs in improving cybersecurity. As a result, organisations remain vulnerable to cyber threats as they may fail to identify and address critical security gaps, implement necessary improvements or allocate resources effectively. To address these limitations and challenges, conducting a thorough investigation into existing models is essential. Therefore, we conducted a Systematic Literature Review (SLR) analysing 43 publications to identify existing CCMMs, their limitations, and the challenges organisations face when implementing and adopting them. By understanding these barriers, we aim to explore avenues for enhancing the efficacy of CCMMs, ensuring they more effectively meet the cybersecurity needs of organisational entities.
Related papers
- Countering Autonomous Cyber Threats [40.00865970939829]
Foundation Models present dual-use concerns broadly and within the cyber domain specifically.
Recent research has shown the potential for these advanced models to inform or independently execute offensive cyberspace operations.
This work evaluates several state-of-the-art FMs on their ability to compromise machines in an isolated network and investigates defensive mechanisms to defeat such AI-powered attacks.
arXiv Detail & Related papers (2024-10-23T22:46:44Z)
- Confronting the Reproducibility Crisis: A Case Study of Challenges in Cybersecurity AI [0.0]
A key area in AI-based cybersecurity focuses on defending deep neural networks against malicious perturbations.
We attempt to validate results from prior work on certified robustness using the VeriGauge toolkit.
Our findings underscore the urgent need for standardized methodologies, containerization, and comprehensive documentation.
arXiv Detail & Related papers (2024-05-29T04:37:19Z)
- Prioritizing Safeguarding Over Autonomy: Risks of LLM Agents for Science [65.77763092833348]
Intelligent agents powered by large language models (LLMs) have demonstrated substantial promise in autonomously conducting experiments and facilitating scientific discoveries across various disciplines.
While their capabilities are promising, these agents also introduce novel vulnerabilities that demand careful consideration for safety.
This paper conducts a thorough examination of vulnerabilities in LLM-based agents within scientific domains, shedding light on potential risks associated with their misuse and emphasizing the need for safety measures.
arXiv Detail & Related papers (2024-02-06T18:54:07Z)
- A Survey and Comparative Analysis of Security Properties of CAN Authentication Protocols [92.81385447582882]
The Controller Area Network (CAN) bus leaves in-vehicle communications inherently insecure.
This paper reviews and compares the 15 most prominent authentication protocols for the CAN bus.
We evaluate protocols based on essential operational criteria that contribute to ease of implementation.
arXiv Detail & Related papers (2024-01-19T14:52:04Z)
- Cybersecurity in Motion: A Survey of Challenges and Requirements for Future Test Facilities of CAVs [11.853500347907826]
Cooperative Intelligent Transportation Systems (C-ITSs) are at the forefront of this evolution.
This paper presents an envisaged Cybersecurity Centre of Excellence (CSCE) designed to bolster research, testing, and evaluation of the cybersecurity of C-ITSs.
arXiv Detail & Related papers (2023-12-22T13:42:53Z)
- Purple Llama CyberSecEval: A Secure Coding Benchmark for Language Models [41.068780235482514]
This paper presents CyberSecEval, a comprehensive benchmark developed to help bolster the cybersecurity of Large Language Models (LLMs) employed as coding assistants.
CyberSecEval provides a thorough evaluation of LLMs in two crucial security domains: their propensity to generate insecure code and their level of compliance when asked to assist in cyberattacks.
arXiv Detail & Related papers (2023-12-07T22:07:54Z)
- Data Driven Approaches to Cybersecurity Governance for Board Decision-Making -- A Systematic Review [0.0]
This systematic literature review investigates the existing risk measurement instruments, cybersecurity metrics, and associated models for supporting BoDs.
The findings showed that, although sophisticated cybersecurity tools exist and continue to develop, Boards of Directors have limited supporting information, in terms of metrics and models, to govern cybersecurity in a language they understand.
arXiv Detail & Related papers (2023-11-29T12:14:01Z)
- Leveraging Traceability to Integrate Safety Analysis Artifacts into the Software Development Process [51.42800587382228]
Safety assurance cases (SACs) can be challenging to maintain during system evolution.
We propose a solution that leverages software traceability to connect relevant system artifacts to safety analysis models.
We elicit design rationales for system changes to help safety stakeholders analyze the impact of system changes on safety.
arXiv Detail & Related papers (2023-07-14T16:03:27Z)
- Deep VULMAN: A Deep Reinforcement Learning-Enabled Cyber Vulnerability Management Framework [4.685954926214926]
Cyber vulnerability management is a critical function of a cybersecurity operations center (CSOC) that helps protect organizations against cyber-attacks on their computer and network systems.
Current approaches are deterministic, one-time decision-making methods that do not consider future uncertainties when prioritising and selecting vulnerabilities for mitigation.
We propose a novel framework, Deep VULMAN, consisting of a deep reinforcement learning agent and an integer programming method to fill this gap in the cyber vulnerability management process.
arXiv Detail & Related papers (2022-08-03T22:32:48Z)
- Elicitation of SME Requirements for Cybersecurity Solutions by Studying Adherence to Recommendations [1.138723572165938]
Small and medium-sized enterprises (SMEs) have become the weak spot of our economy for cyber attacks.
One reason why many SMEs do not adopt cybersecurity solutions is that the developers of those solutions understand little about the SME context.
This poster describes the challenges of SME regarding cybersecurity and introduces our proposed approach to elicit requirements for cybersecurity solutions.
arXiv Detail & Related papers (2020-07-16T08:36:40Z)
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose, spanning institutions, software, and hardware, and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.