Predicting Resilience with Neural Networks
- URL: http://arxiv.org/abs/2308.06309v1
- Date: Fri, 11 Aug 2023 17:29:49 GMT
- Title: Predicting Resilience with Neural Networks
- Authors: Karen da Mata, Priscila Silva and Lance Fiondella
- Abstract summary: Resilience engineering studies the ability of a system to survive and recover from disruptive events.
This paper proposes three alternative neural network (NN) approaches to model and predict system performance.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Resilience engineering studies the ability of a system to survive
and recover from disruptive events, and finds applications in several domains.
Most studies emphasize resilience metrics to quantify system performance,
whereas recent studies propose statistical modeling approaches to project
system recovery time after degradation. Moreover, past studies are either
performed on data collected after recovery or limited to idealized trends.
Therefore, this paper proposes three alternative neural network (NN)
approaches, including (i) Artificial Neural Networks, (ii) Recurrent Neural
Networks, and (iii) Long Short-Term Memory (LSTM), to model and predict system
performance, including the negative and positive factors driving resilience,
in order to quantify the impact of disruptive events and restorative
activities. Goodness-of-fit measures, including mean squared error and
adjusted R squared, are computed to evaluate the models and compare them with
a classical statistical model. Our results indicate that the NN models
outperformed the traditional model on all goodness-of-fit measures. More
specifically, LSTMs achieved over 60% higher adjusted R squared and a 34-fold
reduction in predictive error compared to the traditional method. These
results suggest that NN models for predicting resilience are both feasible
and accurate, and may find practical use in many important domains.
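The abstract evaluates the NN models with mean squared error and adjusted R squared. As a minimal sketch (the helper names below are illustrative, not the authors' code), these two goodness-of-fit measures can be computed from observed and predicted performance values as:

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error between observed and predicted performance."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean((y_true - y_pred) ** 2))

def adjusted_r2(y_true, y_pred, n_params):
    """Adjusted R squared: R squared penalized for the number of
    model parameters (n_params), so more complex models are not
    automatically favored."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    n = len(y_true)
    ss_res = np.sum((y_true - y_pred) ** 2)       # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)  # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    return float(1.0 - (1.0 - r2) * (n - 1) / (n - n_params - 1))
```

A perfect prediction yields an MSE of 0 and an adjusted R squared of 1; comparing these values across the LSTM and the classical statistical model is how the paper's 34-fold error reduction is quantified.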
Related papers
- Generalized Factor Neural Network Model for High-dimensional Regression [50.554377879576066]
We tackle the challenges of modeling high-dimensional data sets with latent low-dimensional structures hidden within complex, non-linear, and noisy relationships.
Our approach enables a seamless integration of concepts from non-parametric regression, factor models, and neural networks for high-dimensional regression.
arXiv Detail & Related papers (2025-02-16T23:13:55Z) - Enhanced Spatiotemporal Prediction Using Physical-guided And Frequency-enhanced Recurrent Neural Networks [17.91230192726962]
This paper proposes a physical-guided neural network to estimate the spatiotemporal dynamics.
We also propose an adaptive second-order Runge-Kutta method with physical constraints to model the physical states more precisely.
Our model outperforms state-of-the-art methods and performs best across datasets, with a much smaller parameter count.
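The adaptive, physics-constrained variant is specific to that paper; as context, a minimal sketch of the classical fixed-step second-order Runge-Kutta (midpoint) step it builds on, for a system y' = f(t, y), looks like:

```python
def rk2_step(f, t, y, h):
    """One explicit second-order Runge-Kutta (midpoint) step for y' = f(t, y).
    The paper's adaptive, physics-constrained method adds step-size control
    and physical constraints on top of a base scheme like this one."""
    k1 = f(t, y)                          # slope at the start of the step
    k2 = f(t + 0.5 * h, y + 0.5 * h * k1)  # slope at the midpoint
    return y + h * k2
```

For y' = y with y(0) = 1 and h = 0.1, a single step gives 1.105, close to the exact value e^0.1 ≈ 1.10517.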
arXiv Detail & Related papers (2024-05-23T12:39:49Z) - How Inverse Conditional Flows Can Serve as a Substitute for Distributional Regression [2.9873759776815527]
We propose a framework for distributional regression using inverse flow transformations (DRIFT).
DRIFT covers both interpretable statistical models and flexible neural networks opening up new avenues in both statistical modeling and deep learning.
arXiv Detail & Related papers (2024-05-08T21:19:18Z) - Amortised Inference in Bayesian Neural Networks [0.0]
We introduce the Amortised Pseudo-Observation Variational Inference Bayesian Neural Network (APOVI-BNN).
We show that the amortised inference is of similar or better quality than that obtained through traditional variational inference.
We then discuss how the APOVI-BNN may be viewed as a new member of the neural process family.
arXiv Detail & Related papers (2023-09-06T14:02:33Z) - A New PHO-rmula for Improved Performance of Semi-Structured Networks [0.0]
We show that techniques to properly identify the contributions of the different model components in SSNs lead to suboptimal network estimation.
We propose a non-invasive post-hocization (PHO) that guarantees identifiability of model components and provides better estimation and prediction quality.
Our theoretical findings are supported by numerical experiments, a benchmark comparison as well as a real-world application to COVID-19 infections.
arXiv Detail & Related papers (2023-06-01T10:23:28Z) - Online Evolutionary Neural Architecture Search for Multivariate Non-Stationary Time Series Forecasting [72.89994745876086]
This work presents the Online Neuro-Evolution-based Neural Architecture Search (ONE-NAS) algorithm.
ONE-NAS is a novel neural architecture search method capable of automatically designing and dynamically training recurrent neural networks (RNNs) for online forecasting tasks.
Results demonstrate that ONE-NAS outperforms traditional statistical time series forecasting methods.
arXiv Detail & Related papers (2023-02-20T22:25:47Z) - Learning Low Dimensional State Spaces with Overparameterized Recurrent Neural Nets [57.06026574261203]
We provide theoretical evidence for learning low-dimensional state spaces, which can also model long-term memory.
Experiments corroborate our theory, demonstrating extrapolation via learning low-dimensional state spaces with both linear and non-linear RNNs.
arXiv Detail & Related papers (2022-10-25T14:45:15Z) - DeepBayes -- an estimator for parameter estimation in stochastic nonlinear dynamical models [11.917949887615567]
We propose DeepBayes estimators that leverage the power of deep recurrent neural networks in learning an estimator.
The deep recurrent neural network architectures can be trained offline and ensure significant time savings during inference.
We demonstrate the applicability of our proposed method on different example models and perform detailed comparisons with state-of-the-art approaches.
arXiv Detail & Related papers (2022-05-04T18:12:17Z) - EINNs: Epidemiologically-Informed Neural Networks [75.34199997857341]
We introduce a new class of physics-informed neural networks, EINNs, crafted for epidemic forecasting.
We investigate how to leverage both the theoretical flexibility provided by mechanistic models and the data-driven expressivity afforded by AI models.
arXiv Detail & Related papers (2022-02-21T18:59:03Z) - Leveraging the structure of dynamical systems for data-driven modeling [111.45324708884813]
We consider the impact of the training set and its structure on the quality of the long-term prediction.
We show how an informed design of the training set, based on invariants of the system and the structure of the underlying attractor, significantly improves the resulting models.
arXiv Detail & Related papers (2021-12-15T20:09:20Z) - Neural Networks and Value at Risk [59.85784504799224]
We perform Monte-Carlo simulations of asset returns for Value at Risk threshold estimation.
Using equity markets and long term bonds as test assets, we investigate neural networks.
We find that our networks, when fed substantially less data, perform significantly worse.
arXiv Detail & Related papers (2020-05-04T17:41:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.