Dynamic cyber risk estimation with Competitive Quantile Autoregression
- URL: http://arxiv.org/abs/2101.10893v1
- Date: Mon, 25 Jan 2021 16:52:27 GMT
- Title: Dynamic cyber risk estimation with Competitive Quantile Autoregression
- Authors: Raisa Dzhamtyrova and Carsten Maple
- Abstract summary: An effective risk framework has the potential to predict, assess, and mitigate possible adverse events.
We propose two methods for modelling Value-at-Risk (VaR) which can be used for any time-series data.
We show, using coverage tests, that these methods can predict the size and inter-arrival time of cyber hacking breaches.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cyber risk estimation is an essential part of any information technology
system's design and governance since the cost of the system compromise could be
catastrophic. An effective risk framework has the potential to predict, assess,
and mitigate possible adverse events. We propose two methods for modelling
Value-at-Risk (VaR) which can be used for any time-series data. The first
approach is based on Quantile Autoregression (QAR), which can estimate VaR for
different quantiles, i.e. confidence levels. The second method, called
Competitive Quantile Autoregression (CQAR), dynamically re-estimates cyber risk
as soon as new data becomes available. This method provides a theoretical
guarantee that it asymptotically performs as well as any QAR at any time point
in the future. We show, using coverage tests, that these methods can predict
the size and inter-arrival time of cyber hacking breaches. The proposed
approaches allow modelling a separate stochastic process for each significance
level and therefore provide more flexibility than previously proposed
techniques. We provide fully reproducible code for the experiments.
Related papers
- A hierarchical approach for assessing the vulnerability of tree-based classification models to membership inference attack [0.552480439325792]
Machine learning models can inadvertently expose confidential properties of their training data, making them vulnerable to membership inference attacks (MIA)
This article presents two new complementary approaches for efficiently identifying vulnerable tree-based models.
arXiv Detail & Related papers (2025-02-13T15:16:53Z)
- Sequential Manipulation Against Rank Aggregation: Theory and Algorithm [119.57122943187086]
We leverage an online attack on the vulnerable data collection process.
From the game-theoretic perspective, the confrontation scenario is formulated as a distributionally robust game.
The proposed method manipulates the results of rank aggregation methods in a sequential manner.
arXiv Detail & Related papers (2024-07-02T03:31:21Z) - Distribution-free risk assessment of regression-based machine learning
algorithms [6.507711025292814]
We focus on regression algorithms and the risk-assessment task of computing the probability of the true label lying inside an interval defined around the model's prediction.
We solve the risk-assessment problem using the conformal prediction approach, which provides prediction intervals that are guaranteed to contain the true label with a given probability.
arXiv Detail & Related papers (2023-10-05T13:57:24Z) - Learning Disturbances Online for Risk-Aware Control: Risk-Aware Flight
with Less Than One Minute of Data [33.7789991023177]
Recent advances in safety-critical risk-aware control are predicated on apriori knowledge of disturbances a system might face.
This paper proposes a method to efficiently learn these disturbances in a risk-aware online context.
arXiv Detail & Related papers (2022-12-12T21:40:23Z) - Risk-Averse No-Regret Learning in Online Convex Games [19.4481913405231]
We consider an online game with risk-averse agents whose goal is to learn optimal decisions that minimize the risk of incurring significantly high costs.
Since the distributions of the cost functions depend on the actions of all agents that are generally unobservable, the Conditional Value at Risk (CVaR) values of the costs are difficult to compute.
We propose a new online risk-averse learning algorithm that relies on one-point zeroth-order estimation of the CVaR gradients computed using CVaR values.
arXiv Detail & Related papers (2022-03-16T21:36:47Z) - A New Approach for Interpretability and Reliability in Clinical Risk
Prediction: Acute Coronary Syndrome Scenario [0.33927193323747895]
We intend to create a new risk assessment methodology that combines the best characteristics of both risk score and machine learning models.
The proposed approach achieved testing results identical to the standard LR, but offers superior interpretability and personalization.
The reliability estimation of individual predictions showed a strong correlation with the misclassification rate.
arXiv Detail & Related papers (2021-10-15T19:33:46Z) - CC-Cert: A Probabilistic Approach to Certify General Robustness of
Neural Networks [58.29502185344086]
In safety-critical machine learning applications, it is crucial to defend models against adversarial attacks.
It is important to provide provable guarantees for deep learning models against semantically meaningful input transformations.
We propose a new universal probabilistic certification approach based on Chernoff-Cramer bounds.
arXiv Detail & Related papers (2021-09-22T12:46:04Z) - Certifiers Make Neural Networks Vulnerable to Availability Attacks [70.69104148250614]
We show for the first time that fallback strategies can be deliberately triggered by an adversary.
In addition to naturally occurring abstains for some inputs and perturbations, the adversary can use training-time attacks to deliberately trigger the fallback.
We design two novel availability attacks, which show the practical relevance of these threats.
arXiv Detail & Related papers (2021-08-25T15:49:10Z) - Learning Probabilistic Ordinal Embeddings for Uncertainty-Aware
Regression [91.3373131262391]
Uncertainty is the only certainty there is.
Traditionally, the direct regression formulation is considered and the uncertainty is modeled by modifying the output space to a certain family of probabilistic distributions.
How to model the uncertainty within the present-day technologies for regression remains an open issue.
arXiv Detail & Related papers (2021-03-25T06:56:09Z) - Trust but Verify: Assigning Prediction Credibility by Counterfactual
Constrained Learning [123.3472310767721]
Prediction credibility measures are fundamental in statistics and machine learning.
These measures should account for the wide variety of models used in practice.
The framework developed in this work expresses the credibility as a risk-fit trade-off.
arXiv Detail & Related papers (2020-11-24T19:52:38Z) - Risk-Sensitive Sequential Action Control with Multi-Modal Human
Trajectory Forecasting for Safe Crowd-Robot Interaction [55.569050872780224]
We present an online framework for safe crowd-robot interaction based on risk-sensitive optimal control, wherein the risk is modeled by the entropic risk measure.
Our modular approach decouples the crowd-robot interaction into learning-based prediction and model-based control.
A simulation study and a real-world experiment show that the proposed framework can accomplish safe and efficient navigation while avoiding collisions with more than 50 humans in the scene.
arXiv Detail & Related papers (2020-09-12T02:02:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.