Reliability Analysis of Complex Systems using Subset Simulations with
Hamiltonian Neural Networks
- URL: http://arxiv.org/abs/2401.05244v1
- Date: Wed, 10 Jan 2024 16:15:42 GMT
- Title: Reliability Analysis of Complex Systems using Subset Simulations with
Hamiltonian Neural Networks
- Authors: Denny Thaler, Somayajulu L. N. Dhulipala, Franz Bamer, Bernd Markert,
Michael D. Shields
- Abstract summary: We present a new Subset Simulation approach using Hamiltonian neural network-based Monte Carlo sampling for reliability analysis.
The proposed strategy combines the superior sampling of the Hamiltonian Monte Carlo method with computationally efficient gradient evaluations.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We present a new Subset Simulation approach using Hamiltonian neural
network-based Monte Carlo sampling for reliability analysis. The proposed
strategy combines the superior sampling of the Hamiltonian Monte Carlo method
with computationally efficient gradient evaluations using Hamiltonian neural
networks. This combination is especially advantageous because the neural
network architecture conserves the Hamiltonian, which defines the acceptance
criteria of the Hamiltonian Monte Carlo sampler. Hence, this strategy achieves
high acceptance rates at low computational cost. Our approach estimates small
failure probabilities using Subset Simulations. However, in low-probability
sample regions, the gradient evaluation is particularly challenging. The
remarkable accuracy of the proposed strategy is demonstrated on different
reliability problems, and its efficiency is compared to the traditional
Hamiltonian Monte Carlo method. We note that this approach can reach its
limitations for gradient estimations in low-probability regions of complex and
high-dimensional distributions. Thus, we propose techniques to improve gradient
prediction in these particular situations and enable accurate estimations of
the probability of failure. The highlight of this study is the reliability
analysis of a system whose parameter distributions must be inferred by solving
a Bayesian inference problem. In such a case, the Hamiltonian Monte Carlo method
requires a full model evaluation for each gradient evaluation and, therefore,
comes at a very high cost. However, using Hamiltonian neural networks in this
framework replaces the expensive model evaluation, resulting in tremendous
improvements in computational efficiency.
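The mechanism the abstract describes, an HMC proposal whose gradients come from a cheap surrogate and whose acceptance test uses the (approximately conserved) Hamiltonian, embedded in a Subset Simulation loop, can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the surrogate gradient here is the analytic score of a standard normal standing in for a trained Hamiltonian neural network, and the limit-state function, sample size `n`, and conditional-level fraction `p0 = 0.1` are hypothetical choices.

```python
import numpy as np

def surrogate_grad_log_p(x):
    # Stand-in for a trained Hamiltonian neural network's gradient output;
    # for a standard normal target the exact score is simply -x.
    return -x

def hamiltonian(x, p):
    # H(x, p) = U(x) + K(p), with U(x) = -log pi(x) for a standard normal
    # target (up to an additive constant).
    return 0.5 * x @ x + 0.5 * p @ p

def hmc_step(x, rng, eps=0.1, n_leap=20):
    """One HMC transition; the leapfrog integrator uses only the surrogate gradient."""
    p0 = rng.standard_normal(x.size)
    x_new, p = x.copy(), p0.copy()
    p = p + 0.5 * eps * surrogate_grad_log_p(x_new)      # half step in momentum
    for _ in range(n_leap - 1):
        x_new = x_new + eps * p
        p = p + eps * surrogate_grad_log_p(x_new)
    x_new = x_new + eps * p
    p = p + 0.5 * eps * surrogate_grad_log_p(x_new)      # final half step
    # Acceptance is governed by how well the Hamiltonian is conserved:
    dH = hamiltonian(x_new, p) - hamiltonian(x, p0)
    return x_new if np.log(rng.uniform()) < -dH else x

def subset_simulation(g, dim, n=1000, p0=0.1, rng=None):
    """Estimate P[g(X) <= 0] for standard normal X via Subset Simulation."""
    if rng is None:
        rng = np.random.default_rng(0)
    x = rng.standard_normal((n, dim))
    prob, n_seed = 1.0, int(p0 * n)
    for _ in range(20):                         # cap on the number of levels
        gv = np.array([g(xi) for xi in x])
        order = np.argsort(gv)
        thresh = gv[order[n_seed - 1]]          # p0-quantile of g sets the next level
        if thresh <= 0:                         # failure region reached
            return prob * np.mean(gv <= 0)
        prob *= p0
        samples = []
        for seed in x[order[:n_seed]]:          # grow one chain per conditional seed
            xi = seed.copy()
            for _ in range(n // n_seed):
                cand = hmc_step(xi, rng)
                if g(cand) <= thresh:           # reject moves leaving the current level
                    xi = cand
                samples.append(xi.copy())
        x = np.array(samples[:n])
    return prob
```

For the hypothetical limit state g(x) = 2.5 - x[0], the exact failure probability is Phi(-2.5), roughly 6.2e-3, so a correct implementation should return a value within sampling error of that.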
Related papers
- Non-asymptotic convergence analysis of the stochastic gradient Hamiltonian Monte Carlo algorithm with discontinuous stochastic gradient with applications to training of ReLU neural networks [8.058385158111207]
We provide a non-asymptotic analysis of the convergence of the stochastic gradient Hamiltonian Monte Carlo algorithm to a target measure in Wasserstein-1 and Wasserstein-2 distance.
To illustrate our main results, we consider numerical experiments on quantile estimation and on several problems involving ReLU neural networks relevant in finance and artificial intelligence.
arXiv Detail & Related papers (2024-09-25T17:21:09Z)
- Neural Network-Based Score Estimation in Diffusion Models: Optimization and Generalization [12.812942188697326]
Diffusion models have emerged as a powerful tool rivaling GANs in generating high-quality samples with improved fidelity, flexibility, and robustness.
A key component of these models is to learn the score function through score matching.
Despite empirical success on various tasks, it remains unclear whether gradient-based algorithms can learn the score function with a provable accuracy.
arXiv Detail & Related papers (2024-01-28T08:13:56Z)
- An Optimization-based Deep Equilibrium Model for Hyperspectral Image Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
arXiv Detail & Related papers (2023-06-10T08:25:16Z)
- High Accuracy Uncertainty-Aware Interatomic Force Modeling with Equivariant Bayesian Neural Networks [3.028098724882708]
We introduce a new Monte Carlo Markov chain sampling algorithm for learning interatomic forces.
In addition, we introduce a new neural network model based on the NequIP architecture and demonstrate that, when combined with our novel sampling algorithm, we obtain predictions with state-of-the-art accuracy as well as a good measure of uncertainty.
arXiv Detail & Related papers (2023-04-05T10:39:38Z)
- Monte Carlo Neural PDE Solver for Learning PDEs via Probabilistic Representation [59.45669299295436]
We propose a Monte Carlo PDE solver for training unsupervised neural solvers.
We use the PDEs' probabilistic representation, which regards macroscopic phenomena as ensembles of random particles.
Our experiments on convection-diffusion, Allen-Cahn, and Navier-Stokes equations demonstrate significant improvements in accuracy and efficiency.
arXiv Detail & Related papers (2023-02-10T08:05:19Z)
- Calibration and Uncertainty Quantification of Bayesian Convolutional Neural Networks for Geophysical Applications [0.0]
Such subsurface models should provide calibrated probabilities and the associated uncertainties in their predictions.
It has been shown that popular Deep Learning-based models are often miscalibrated, and due to their deterministic nature, provide no means to interpret the uncertainty of their predictions.
We compare three different approaches for obtaining probabilistic models based on convolutional neural networks in a Bayesian formalism.
arXiv Detail & Related papers (2021-05-25T17:54:23Z)
- Variance based sensitivity analysis for Monte Carlo and importance sampling reliability assessment with Gaussian processes [0.0]
We propose a methodology to quantify the sensitivity of the probability of failure estimator to two uncertainty sources.
This analysis also makes it possible to control the total error associated with the failure probability estimate and thus provides an accuracy criterion for the estimation.
The approach is proposed for both a Monte Carlo based method as well as an importance sampling based method, seeking to improve the estimation of rare event probabilities.
arXiv Detail & Related papers (2020-11-30T17:06:28Z)
- Gaussian Process-based Min-norm Stabilizing Controller for Control-Affine Systems with Uncertain Input Effects and Dynamics [90.81186513537777]
We propose a novel compound kernel that captures the control-affine nature of the problem.
We show that the resulting optimization problem is convex, and we call it the Gaussian Process-based Control Lyapunov Function Second-Order Cone Program (GP-CLF-SOCP).
arXiv Detail & Related papers (2020-11-14T01:27:32Z)
- Sinkhorn Natural Gradient for Generative Models [125.89871274202439]
We propose a novel Sinkhorn Natural Gradient (SiNG) algorithm which acts as a steepest descent method on the probability space endowed with the Sinkhorn divergence.
We show that the Sinkhorn information matrix (SIM), a key component of SiNG, has an explicit expression and can be evaluated accurately in complexity that scales logarithmically.
In our experiments, we quantitatively compare SiNG with state-of-the-art SGD-type solvers on generative tasks to demonstrate the efficiency and efficacy of our method.
arXiv Detail & Related papers (2020-11-09T02:51:17Z)
- Amortized Conditional Normalized Maximum Likelihood: Reliable Out of Distribution Uncertainty Estimation [99.92568326314667]
We propose the amortized conditional normalized maximum likelihood (ACNML) method as a scalable general-purpose approach for uncertainty estimation.
Our algorithm builds on the conditional normalized maximum likelihood (CNML) coding scheme, which has minimax optimal properties according to the minimum description length principle.
We demonstrate that ACNML compares favorably to a number of prior techniques for uncertainty estimation in terms of calibration on out-of-distribution inputs.
arXiv Detail & Related papers (2020-11-05T08:04:34Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences arising from its use.