Data Driven Approaches to Cybersecurity Governance for Board Decision-Making -- A Systematic Review
- URL: http://arxiv.org/abs/2311.17578v1
- Date: Wed, 29 Nov 2023 12:14:01 GMT
- Title: Data Driven Approaches to Cybersecurity Governance for Board Decision-Making -- A Systematic Review
- Authors: Anita Modi, Ievgeniia Kuzminykh, Bogdan Ghita
- Abstract summary: This systematic literature review investigates the existing risk measurement instruments, cybersecurity metrics, and associated models for supporting BoDs.
The findings show that, although sophisticated cybersecurity tools exist and continue to develop, Boards of Directors receive little support, in terms of metrics and models, for governing cybersecurity in a language they understand.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cybersecurity governance shapes the quality of strategic decision-making and helps ensure cyber risks are managed effectively. Boards of Directors are the decision-makers held accountable for managing this risk; however, they often lack the adequate and efficient information necessary to make such decisions. Among the many challenges they face, they are frequently insufficiently versed in technology and cybersecurity terminology, or are not given the right tools to support sound decisions for governing cybersecurity effectively. A different approach is needed to ensure BoDs are clear on how the business is building a cyber-resilient organization. This systematic literature review investigates the existing risk measurement instruments, cybersecurity metrics, and associated models for supporting BoDs. Through literature analysis we identified seven conceptual themes, which form the basis of this study's main contribution. The findings show that, although sophisticated cybersecurity tools exist and continue to develop, Boards of Directors receive little support, in terms of metrics and models, for governing cybersecurity in a language they understand. The review also recommends theories and models that can be investigated further to support Boards of Directors.
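As a purely illustrative sketch (not a model from the paper), the kind of board-level metric the review is concerned with can be as simple as rolling per-scenario likelihood and financial impact estimates into a single expected-loss figure; all scenario names and numbers below are assumptions:

```python
# Illustrative sketch only -- not a model from the paper.
# Aggregates per-scenario likelihood and impact into a single
# board-level figure (expected annual loss), expressing cyber risk
# in business rather than technical terms.

scenarios = [
    # (name, estimated annual likelihood, estimated financial impact in USD)
    ("ransomware outage", 0.15, 4_000_000),
    ("customer data breach", 0.10, 6_500_000),
    ("supplier compromise", 0.05, 2_000_000),
]

expected_annual_loss = sum(p * impact for _, p, impact in scenarios)
worst_case = max(impact for _, _, impact in scenarios)

print(f"Expected annual loss: ${expected_annual_loss:,.0f}")
print(f"Single worst-case scenario: ${worst_case:,.0f}")
```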
Related papers
- SoK: Identifying Limitations and Bridging Gaps of Cybersecurity Capability Maturity Models (CCMMs) [1.2016264781280588]
Cybersecurity Capability Maturity Models (CCMMs) emerge as pivotal tools in enhancing organisational cybersecurity posture.
CCMMs provide a structured framework to guide organisations in assessing their current cybersecurity capabilities, identifying critical gaps, and prioritising improvements.
However, the full potential of CCMMs is often not realised due to inherent limitations within the models and challenges encountered during their implementation and adoption processes.
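As a hedged illustration of the assessment step a CCMM supports, the sketch below ranks capability domains by their maturity gap; the domains, current levels, and target levels are assumed for the example and do not come from any specific model:

```python
# Illustrative CCMM-style gap analysis -- domains, current levels, and
# target levels are assumed for the example, not taken from a specific model.

current = {"governance": 2, "incident response": 1, "asset management": 3, "awareness": 2}
target = {"governance": 4, "incident response": 3, "asset management": 3, "awareness": 3}

# Rank domains by the size of the maturity gap to prioritise improvements.
gaps = {domain: target[domain] - current[domain] for domain in current}
for domain, gap in sorted(gaps.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{domain:>18}: current {current[domain]}, target {target[domain]}, gap {gap}")
```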
arXiv Detail & Related papers (2024-08-28T21:00:20Z) - Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training/learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z) - Threat-Informed Cyber Resilience Index: A Probabilistic Quantitative Approach to Measure Defence Effectiveness Against Cyber Attacks [0.36832029288386137]
This paper introduces the Cyber Resilience Index (CRI), a threat-informed probabilistic approach to quantifying an organisation's defence effectiveness against cyber-attacks (campaigns).
Building upon the Threat-Intelligence Based Security Assessment (TIBSA) methodology, we present a mathematical model that translates complex threat intelligence into an actionable, unified metric, similar to a stock market index, that executives can understand and interact with and that security teams can act upon.
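The CRI model itself is more elaborate, but the general idea of collapsing campaign-level estimates into one executive-facing number can be sketched as follows; the campaigns, likelihoods, and defence probabilities are assumptions for illustration, not values from the paper:

```python
# Illustrative sketch of a threat-informed defence-effectiveness index.
# The campaigns, likelihoods, and detection/prevention probabilities are
# invented inputs, not values from the CRI paper.

campaigns = [
    # (campaign, likelihood of being targeted, probability defences stop it)
    ("phishing-led intrusion", 0.6, 0.7),
    ("ransomware deployment", 0.3, 0.5),
    ("supply-chain compromise", 0.1, 0.4),
]

# Weight each campaign's defence success probability by how likely the
# campaign is, then scale to a 0-100 index that can be tracked over time.
total_likelihood = sum(p for _, p, _ in campaigns)
index = 100 * sum(p * q for _, p, q in campaigns) / total_likelihood
print(f"Cyber resilience index: {index:.1f} / 100")
```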
arXiv Detail & Related papers (2024-06-27T17:51:48Z) - ABI Approach: Automatic Bias Identification in Decision-Making Under Risk based in an Ontology of Behavioral Economics [46.57327530703435]
Risk-seeking preferences for losses, driven by biases such as loss aversion, pose challenges and can result in severe negative consequences.
This research introduces the ABI approach, a novel solution designed to support organizational decision-makers by automatically identifying and explaining risk-seeking preferences.
arXiv Detail & Related papers (2024-05-22T23:53:46Z) - QBER: Quantifying Cyber Risks for Strategic Decisions [0.0]
We introduce the QBER approach to offer decision-makers measurable risk metrics.
The QBER evaluates losses from cyberattacks, performs detailed risk analyses based on existing cybersecurity measures, and provides thorough cost assessments.
Our contributions involve outlining cyberattack probabilities and risks, identifying Technical, Economic, and Legal (TEL) impacts, creating a model to gauge impacts, suggesting risk mitigation strategies, and examining trends and challenges in implementing widespread Cyber Risk Quantification (CRQ).
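A hedged sketch of this style of quantification, combining an attack probability with Technical, Economic, and Legal (TEL) impact estimates into an expected-loss figure, is shown below; all inputs are invented for the example and this is not the QBER model itself:

```python
# Illustrative TEL-style expected-loss calculation -- the inputs are
# invented for the example and do not come from the QBER paper.

attack_probability = 0.2          # estimated annual probability of a successful attack

impacts = {                       # estimated losses per impact dimension (USD)
    "technical": 500_000,         # recovery, re-engineering, downtime
    "economic": 1_200_000,        # lost revenue, customer churn
    "legal": 800_000,             # fines, litigation, notification costs
}

mitigation_effectiveness = 0.4    # fraction of loss avoided by current controls

gross_loss = sum(impacts.values())
expected_loss = attack_probability * gross_loss * (1 - mitigation_effectiveness)
print(f"Expected annual loss after mitigation: ${expected_loss:,.0f}")
```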
arXiv Detail & Related papers (2024-05-06T14:25:58Z) - We need to aim at the top: Factors associated with cybersecurity awareness of cyber and information security decision-makers [0.0]
We study cybersecurity awareness of cyber and information security decision-makers.
Our findings indicate that awareness of well-known threats and solutions seems to be quite low for individuals in decision-making roles.
arXiv Detail & Related papers (2024-04-06T20:32:19Z) - Risk-reducing design and operations toolkit: 90 strategies for managing risk and uncertainty in decision problems [65.268245109828]
This paper develops a catalog of such strategies and a framework for them.
It argues that they provide an efficient response to decision problems that are seemingly intractable due to high uncertainty.
It then proposes a framework to incorporate them into decision theory using multi-objective optimization.
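One way to read incorporating such strategies into decision theory via multi-objective optimization is to keep only strategies that are not dominated on the objectives of interest; the minimal Pareto-filter sketch below uses assumed strategy names and (cost, residual risk) scores, not material from the paper:

```python
# Illustrative Pareto filter over candidate risk-reducing strategies.
# Each strategy is scored on (expected cost, residual risk); lower is better
# on both. Strategies and scores are invented for the example.

strategies = {
    "add redundancy": (120, 0.20),
    "buy insurance": (80, 0.35),
    "delay decision": (20, 0.60),
    "gather more data": (60, 0.30),
}

def dominated(a, b):
    """True if option b is at least as good as a on every objective and strictly better on one."""
    return all(y <= x for x, y in zip(a, b)) and any(y < x for x, y in zip(a, b))

pareto = {name: score for name, score in strategies.items()
          if not any(dominated(score, other)
                     for other_name, other in strategies.items()
                     if other_name != name)}
print("Non-dominated strategies:", sorted(pareto))
```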
arXiv Detail & Related papers (2023-09-06T16:14:32Z) - Inverse Online Learning: Understanding Non-Stationary and Reactionary Policies [79.60322329952453]
We show how to develop interpretable representations of how agents make decisions.
By understanding the decision-making processes underlying a set of observed trajectories, we cast the policy inference problem as the inverse to this online learning problem.
We introduce a practical algorithm for retrospectively estimating such perceived effects, alongside the process through which agents update them.
Through application to the analysis of UNOS organ donation acceptance decisions, we demonstrate that our approach can bring valuable insights into the factors that govern decision processes and how they change over time.
arXiv Detail & Related papers (2022-03-14T17:40:42Z) - Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
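One pitfall of the kind the paper discusses is data snooping, for example training on samples that postdate the test data; the sketch below shows a time-aware split that avoids it, with the dataset and timestamps invented for illustration:

```python
# Illustrative time-aware train/test split to avoid temporal data snooping,
# one of the evaluation pitfalls of the kind the paper identifies.
# The sample data and timestamps are invented for the example.

from datetime import date

samples = [
    # (timestamp, feature vector, label: 1 = malicious, 0 = benign)
    (date(2019, 1, 10), [0.2, 1.0], 0),
    (date(2019, 6, 3), [0.9, 0.1], 1),
    (date(2020, 2, 14), [0.4, 0.7], 0),
    (date(2020, 9, 30), [0.8, 0.2], 1),
]

cutoff = date(2020, 1, 1)

# Train only on samples observed before the cutoff; evaluate on later ones.
train = [(x, y) for t, x, y in samples if t < cutoff]
test = [(x, y) for t, x, y in samples if t >= cutoff]
print(f"{len(train)} training samples, {len(test)} test samples")
```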
arXiv Detail & Related papers (2020-10-19T13:09:31Z) - Elicitation of SME Requirements for Cybersecurity Solutions by Studying Adherence to Recommendations [1.138723572165938]
Small and medium-sized enterprises (SMEs) have become the weak spot of our economy for cyber attacks.
One of the reasons why many SMEs do not adopt cybersecurity is that developers of cybersecurity solutions have little understanding of the SME context.
This poster describes the challenges SMEs face regarding cybersecurity and introduces our proposed approach for eliciting requirements for cybersecurity solutions.
arXiv Detail & Related papers (2020-07-16T08:36:40Z) - Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)