Modeling Multivariate Cyber Risks: Deep Learning Dating Extreme Value
Theory
- URL: http://arxiv.org/abs/2103.08450v1
- Date: Mon, 15 Mar 2021 15:18:53 GMT
- Title: Modeling Multivariate Cyber Risks: Deep Learning Dating Extreme Value
Theory
- Authors: Mingyue Zhang Wu, Jinzhu Luo, Xing Fang, Maochao Xu, Peng Zhao
- Abstract summary: The proposed model enjoys highly accurate point predictions via deep learning and satisfactory high-quantile predictions via extreme value theory.
Empirical evidence based on real honeypot attack data also shows that the proposed model achieves very satisfactory prediction performance.
- Score: 6.451038884092264
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modeling cyber risks has been an important but challenging task in the domain
of cyber security, mainly because of the high dimensionality and heavy tails of
risk patterns. These obstacles have hindered the development of statistical
modeling of multivariate cyber risks. In this work, we propose a novel approach
to modeling multivariate cyber risks that relies on deep learning and extreme
value theory. The proposed model not only delivers highly accurate point
predictions via deep learning but also provides satisfactory high-quantile
predictions via extreme value theory. A simulation study shows that the
proposed model captures multivariate cyber risks well and delivers satisfactory
predictive performance. Empirical evidence based on real honeypot attack data
likewise shows that the proposed model achieves very satisfactory prediction
performance.
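The abstract's two-stage idea (a learned point predictor plus an extreme value theory tail model for high quantiles) can be sketched roughly as follows. This is an illustrative toy, not the paper's actual architecture: the least-squares fit stands in for the deep network, the data are simulated, and the threshold choice is arbitrary. The tail step is the standard peaks-over-threshold (POT) quantile estimator with a generalized Pareto fit to residual exceedances.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)

# Simulated "attack intensities": a linear signal plus heavy-tailed noise.
x = rng.uniform(0, 1, 2000)
y = 10 * x + rng.pareto(2.5, 2000)

# Stage 1: point prediction. A plain least-squares fit serves here as a
# placeholder for the paper's deep learning model.
A = np.column_stack([x, np.ones_like(x)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ coef

# Stage 2: fit a generalized Pareto distribution (GPD) to residual
# exceedances over a high threshold u (here the 90th percentile).
u = np.quantile(resid, 0.90)
exceed = resid[resid > u] - u
shape, _, scale = genpareto.fit(exceed, floc=0.0)

# POT estimator of the p-th residual quantile for p above the threshold level:
#   q_p = u + (scale/shape) * (((n/n_u) * (1-p))**(-shape) - 1)
p = 0.99
n, n_u = len(resid), len(exceed)
q_resid = u + (scale / shape) * (((n / n_u) * (1 - p)) ** (-shape) - 1)

# Final high-quantile prediction = point prediction + residual tail quantile.
x_new = 0.5
point = coef[0] * x_new + coef[1]
q_pred = point + q_resid
```

The split mirrors the abstract's claim: the learned regressor handles the conditional mean accurately, while the GPD tail supplies quantiles far beyond what the bulk of the training residuals can support.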
Related papers
- Cyber Risk Taxonomies: Statistical Analysis of Cybersecurity Risk Classifications [0.0]
We argue in favour of switching attention from goodness-of-fit and in-sample performance to out-of-sample forecasting performance.
Our results indicate that business motivated cyber risk classifications appear to be too restrictive and not flexible enough to capture the heterogeneity of cyber risk events.
arXiv Detail & Related papers (2024-10-04T04:12:34Z) - On Least Square Estimation in Softmax Gating Mixture of Experts [78.3687645289918]
We investigate the performance of the least squares estimators (LSE) under a deterministic MoE model.
We establish a condition called strong identifiability to characterize the convergence behavior of various types of expert functions.
Our findings have important practical implications for expert selection.
arXiv Detail & Related papers (2024-02-05T12:31:18Z) - DeRisk: An Effective Deep Learning Framework for Credit Risk Prediction
over Real-World Financial Data [13.480823015283574]
We propose DeRisk, an effective deep learning risk prediction framework for credit risk prediction on real-world financial data.
DeRisk is the first deep risk prediction model that outperforms statistical learning approaches deployed in our company's production system.
arXiv Detail & Related papers (2023-08-07T16:22:59Z) - Autoregressive Uncertainty Modeling for 3D Bounding Box Prediction [63.3021778885906]
3D bounding boxes are a widespread intermediate representation in many computer vision applications.
We propose methods for leveraging our autoregressive model to make high confidence predictions and meaningful uncertainty measures.
We release a simulated dataset, COB-3D, which highlights new types of ambiguity that arise in real-world robotics applications.
arXiv Detail & Related papers (2022-10-13T23:57:40Z) - Statistical Modeling of Data Breach Risks: Time to Identification and
Notification [2.132096006921048]
We propose a novel approach to imputing the missing data, and further develop a dependence model to capture the complex pattern exhibited by those two metrics.
The empirical study shows that the proposed approach has a satisfactory predictive performance and is superior to other commonly used models.
arXiv Detail & Related papers (2022-09-15T14:08:23Z) - Predictive Multiplicity in Probabilistic Classification [25.111463701666864]
We present a framework for measuring predictive multiplicity in probabilistic classification.
We demonstrate the incidence and prevalence of predictive multiplicity in real-world tasks.
Our results emphasize the need to report predictive multiplicity more widely.
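Predictive multiplicity, as summarized above, concerns competing models with near-identical accuracy that assign different probabilities to the same examples. A minimal sketch of measuring it, under loud assumptions: the data are synthetic, the models are plain gradient-descent logistic regressions, and bootstrap resampling is used as a crude proxy for the set of near-optimal competing models (the cited paper's formal framework is different).

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy binary classification data with a known linear signal.
X = rng.normal(size=(500, 3))
w_true = np.array([1.5, -2.0, 0.5])
y = (X @ w_true + rng.logistic(size=500) > 0).astype(float)

def fit_logistic(X, y, seed, steps=500, lr=0.1):
    """Gradient-descent logistic regression on a bootstrap resample;
    each seed yields a slightly different near-optimal model."""
    rs = np.random.default_rng(seed)
    idx = rs.integers(0, len(y), len(y))  # bootstrap indices
    Xb, yb = X[idx], y[idx]
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - yb) / len(yb)
    return w

# An ensemble of competing models; each row holds one model's
# predicted probabilities on the full data set.
probs = np.array([1.0 / (1.0 + np.exp(-X @ fit_logistic(X, y, s)))
                  for s in range(10)])

# Per-example probability spread across the competing models: a simple
# severity measure of predictive multiplicity.
spread = probs.max(axis=0) - probs.min(axis=0)
print(f"max probability spread across models: {spread.max():.3f}")
```

Examples with a large spread are exactly the ones for which the reported probability depends on an arbitrary modeling choice, which is the concern the summary raises.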
arXiv Detail & Related papers (2022-06-02T16:25:29Z) - The Unreasonable Effectiveness of Deep Evidential Regression [72.30888739450343]
A new approach with uncertainty-aware regression-based neural networks (NNs) shows promise over traditional deterministic methods and typical Bayesian NNs.
We detail the theoretical shortcomings and analyze the performance on synthetic and real-world data sets, showing that Deep Evidential Regression is a heuristic rather than an exact uncertainty quantification.
arXiv Detail & Related papers (2022-05-20T10:10:32Z) - Black-box Adversarial Attacks on Network-wide Multi-step Traffic State
Prediction Models [4.353029347463806]
We propose an adversarial attack framework by treating the prediction model as a black-box.
The adversary can query the prediction model as an oracle with any input and obtain the corresponding output.
To test the attack's effectiveness, two state-of-the-art graph neural network-based models (GCGRNN and DCRNN) are examined.
arXiv Detail & Related papers (2021-10-17T03:45:35Z) - Trust but Verify: Assigning Prediction Credibility by Counterfactual
Constrained Learning [123.3472310767721]
Prediction credibility measures are fundamental in statistics and machine learning.
These measures should account for the wide variety of models used in practice.
The framework developed in this work expresses the credibility as a risk-fit trade-off.
arXiv Detail & Related papers (2020-11-24T19:52:38Z) - Risk-Sensitive Sequential Action Control with Multi-Modal Human
Trajectory Forecasting for Safe Crowd-Robot Interaction [55.569050872780224]
We present an online framework for safe crowd-robot interaction based on risk-sensitive optimal control, wherein the risk is modeled by the entropic risk measure.
Our modular approach decouples the crowd-robot interaction into learning-based prediction and model-based control.
A simulation study and a real-world experiment show that the proposed framework can accomplish safe and efficient navigation while avoiding collisions with more than 50 humans in the scene.
arXiv Detail & Related papers (2020-09-12T02:02:52Z) - Plausible Counterfactuals: Auditing Deep Learning Classifiers with
Realistic Adversarial Examples [84.8370546614042]
The black-box nature of deep learning models has posed unanswered questions about what they learn from data.
A Generative Adversarial Network (GAN) and multi-objective heuristics are used to furnish a plausible attack to the audited model.
Its utility is showcased within a human face classification task, unveiling the enormous potential of the proposed framework.
arXiv Detail & Related papers (2020-03-25T11:08:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.