Using POMDP-based Approach to Address Uncertainty-Aware Adaptation for
Self-Protecting Software
- URL: http://arxiv.org/abs/2308.02134v2
- Date: Wed, 9 Aug 2023 14:34:46 GMT
- Authors: Ryan Liu, Ladan Tahvildari
- Abstract summary: Moving Target Defense (MTD) changes software characteristics to make it harder for attackers to exploit vulnerabilities.
Existing MTD decision-making solutions have neglected uncertainty in model parameters and lack self-adaptation.
This paper proposes an uncertainty-aware and self-adaptive MTD decision engine based on Partially Observable Markov Decision Process and Bayesian Learning techniques.
- Score: 4.459996749171579
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The threats posed by evolving cyberattacks have led to increased research
related to software systems that can self-protect. One topic in this domain is
Moving Target Defense (MTD), which changes software characteristics in the
protected system to make it harder for attackers to exploit vulnerabilities.
However, MTD implementation and deployment are often impacted by run-time
uncertainties, and existing MTD decision-making solutions have neglected
uncertainty in model parameters and lack self-adaptation. This paper aims to
address this gap by proposing an approach for an uncertainty-aware and
self-adaptive MTD decision engine based on Partially Observable Markov Decision
Process and Bayesian Learning techniques. The proposed approach considers
uncertainty in both state and model parameters; thus, it has the potential to
better capture environmental variability and improve defense strategies. A
preliminary study is presented to highlight the potential effectiveness and
challenges of the proposed approach.
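The core POMDP machinery the abstract refers to can be illustrated with a minimal belief-update sketch. This is not the paper's implementation: the two-state MTD model, the transition/observation probabilities, and all names below are illustrative assumptions. It shows only the standard predict-then-correct Bayesian belief update over a hidden security state, which is the mechanism that lets a POMDP-based decision engine track state uncertainty.

```python
import numpy as np

# Hypothetical 2-state MTD model: the protected system is either
# "safe" (index 0) or "compromised" (index 1). All probabilities
# below are made up for illustration, not taken from the paper.
T = np.array([[0.9, 0.1],   # P(s'|s, a=rotate): from safe
              [0.6, 0.4]])  # from compromised (rotation may evict the attacker)
O = np.array([[0.8, 0.2],   # P(o|s'): "normal"/"alert" observation given safe
              [0.3, 0.7]])  # given compromised

def belief_update(b, obs_idx):
    """Standard POMDP belief update: predict via T, correct via O, normalize."""
    predicted = T.T @ b                  # prior over next state after the action
    unnorm = O[:, obs_idx] * predicted   # weight by observation likelihood
    return unnorm / unnorm.sum()

b = np.array([0.5, 0.5])         # maximally uncertain initial belief
b = belief_update(b, obs_idx=1)  # an "alert" observation shifts mass
                                 # toward the "compromised" state
```

In the paper's setting, Bayesian learning would additionally place priors over the entries of `T` and `O` themselves, so that both the hidden state and the model parameters are treated as uncertain; the sketch above fixes the parameters for brevity.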
Related papers
- TTP-Based Cyber Resilience Index: A Probabilistic Quantitative Approach to Measure Defence Effectiveness Against Cyber Attacks [0.36832029288386137]
This paper introduces the Cyber Resilience Index (CRI), a TTP-based probabilistic approach to quantifying an organisation's defence effectiveness against cyber-attacks (campaigns).
We present a mathematical model that translates complex threat intelligence into an actionable, unified metric similar to a stock market index, that executives can understand and interact with while teams can act upon.
arXiv Detail & Related papers (2024-06-27T17:51:48Z)
- Dynamic Vulnerability Criticality Calculator for Industrial Control Systems [0.0]
This paper introduces an innovative approach by proposing a dynamic vulnerability criticality calculator.
Our methodology encompasses the analysis of environmental topology and the effectiveness of deployed security mechanisms.
Our approach integrates these factors into a comprehensive Fuzzy Cognitive Map model, incorporating attack paths to holistically assess the overall vulnerability score.
arXiv Detail & Related papers (2024-03-20T09:48:47Z)
- End-to-End Learning for Fair Multiobjective Optimization Under Uncertainty [55.04219793298687]
The Predict-Then-Forecast (PtO) paradigm in machine learning aims to maximize downstream decision quality.
This paper extends the PtO methodology to optimization problems with nondifferentiable Ordered Weighted Averaging (OWA) objectives.
It shows how optimization of OWA functions can be effectively integrated with parametric prediction for fair and robust optimization under uncertainty.
arXiv Detail & Related papers (2024-02-12T16:33:35Z)
- Risk-reducing design and operations toolkit: 90 strategies for managing risk and uncertainty in decision problems [65.268245109828]
This paper develops a catalog of such strategies and develops a framework for them.
It argues that they provide an efficient response to decision problems that are seemingly intractable due to high uncertainty.
It then proposes a framework to incorporate them into decision theory using multi-objective optimization.
arXiv Detail & Related papers (2023-09-06T16:14:32Z)
- Meta-Learning Priors for Safe Bayesian Optimization [72.8349503901712]
We build on a meta-learning algorithm, F-PACOH, capable of providing reliable uncertainty quantification in settings of data scarcity.
As a core contribution, we develop a novel framework for choosing safety-compliant priors in a data-driven manner.
On benchmark functions and a high-precision motion system, we demonstrate that our meta-learned priors accelerate the convergence of safe BO approaches.
arXiv Detail & Related papers (2022-10-03T08:38:38Z)
- Deep VULMAN: A Deep Reinforcement Learning-Enabled Cyber Vulnerability Management Framework [4.685954926214926]
Cyber vulnerability management is a critical function of a cybersecurity operations center (CSOC) that helps protect organizations against cyber-attacks on their computer and network systems.
The current approaches are deterministic and one-time decision-making methods, which do not consider future uncertainties when prioritizing and selecting vulnerabilities for mitigation.
We propose a novel framework, Deep VULMAN, consisting of a deep reinforcement learning agent and an integer programming method to fill this gap in the cyber vulnerability management process.
arXiv Detail & Related papers (2022-08-03T22:32:48Z)
- Reinforcement Learning with a Terminator [80.34572413850186]
We learn the parameters of the TerMDP and leverage the structure of the estimation problem to provide state-wise confidence bounds.
We use these to construct a provably-efficient algorithm, which accounts for termination, and bound its regret.
arXiv Detail & Related papers (2022-05-30T18:40:28Z)
- Towards Assessing and Characterizing the Semantic Robustness of Face Recognition [55.258476405537344]
Face Recognition Models (FRMs) based on Deep Neural Networks (DNNs) inherit this vulnerability.
We propose a methodology for assessing and characterizing the robustness of FRMs against semantic perturbations to their input.
arXiv Detail & Related papers (2022-02-10T12:22:09Z)
- Lyapunov-based uncertainty-aware safe reinforcement learning [0.0]
Reinforcement learning (RL) has shown promising performance in learning optimal policies for a variety of sequential decision-making tasks.
In many real-world RL problems, besides optimizing the main objectives, the agent is expected to satisfy a certain level of safety.
We propose a Lyapunov-based uncertainty-aware safe RL model to address these limitations.
arXiv Detail & Related papers (2021-07-29T13:08:15Z)
- An Offline Risk-aware Policy Selection Method for Bayesian Markov Decision Processes [0.0]
Exploitation vs Caution (EvC) is a paradigm that elegantly incorporates model uncertainty abiding by the Bayesian formalism.
We validate EvC with state-of-the-art approaches in different discrete, yet simple, environments offering a fair variety of MDP classes.
In the tested scenarios EvC manages to select robust policies and hence stands out as a useful tool for practitioners.
arXiv Detail & Related papers (2021-05-27T20:12:20Z)
- Covert Model Poisoning Against Federated Learning: Algorithm Design and Optimization [76.51980153902774]
Federated learning (FL) is vulnerable to external attacks on FL models during parameter transmission.
In this paper, we propose effective covert model poisoning (CMP) algorithms to combat state-of-the-art defensive aggregation mechanisms.
Our experimental results demonstrate that the proposed CMP algorithms are effective and substantially outperform existing attack mechanisms.
arXiv Detail & Related papers (2021-01-28T03:28:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.