Laplace HypoPINN: Physics-Informed Neural Network for hypocenter
localization and its predictive uncertainty
- URL: http://arxiv.org/abs/2205.14439v1
- Date: Sat, 28 May 2022 13:59:32 GMT
- Authors: Muhammad Izzatullah, Isa Eren Yildirim, Umair Bin Waheed, Tariq
Alkhalifah
- Abstract summary: We develop a PINN-based inversion framework for hypocenter localization.
We investigate the propagation of uncertainties from the random realizations of HypoPINN's weights and biases.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Several techniques have been proposed over the years for automatic hypocenter
localization. While these techniques trade off computational efficiency against
the susceptibility of getting trapped in local minima, an alternative approach
is needed that offers robust localization performance and holds the potential
to make the elusive goal of real-time microseismic monitoring possible.
Physics-informed neural networks (PINNs) have
appeared on the scene as a flexible and versatile framework for solving partial
differential equations (PDEs) along with the associated initial or boundary
conditions. We develop HypoPINN -- a PINN-based inversion framework for
hypocenter localization and introduce an approximate Bayesian framework for
estimating its predictive uncertainties. This work focuses on predicting the
hypocenter locations using HypoPINN and investigates the propagation of
uncertainties from the random realizations of HypoPINN's weights and biases
using the Laplace approximation. We train HypoPINN to obtain the optimized
weights for predicting hypocenter location. Next, we approximate the covariance
matrix at the optimized HypoPINN's weights for posterior sampling with the
Laplace approximation. The posterior samples represent various realizations of
HypoPINN's weights. Finally, we predict the hypocenter locations associated
with these weight realizations to investigate the uncertainty propagation
arising from them. We demonstrate the features of
this methodology through several numerical examples, including using the Otway
velocity model based on the Otway project in Australia.
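The workflow in the abstract, train to a point estimate, build a Laplace covariance around the optimized weights, draw posterior weight samples, and propagate them to predictions, can be sketched on a toy problem. All names below are illustrative stand-ins: a linear "network" with squared loss replaces the paper's eikonal-equation PINN, and the Gaussian prior precision is an assumed hyperparameter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained network: linear map w -> prediction, squared loss.
X = rng.normal(size=(50, 3))           # inputs (stand-in for travel-time data)
w_map = rng.normal(size=3)             # pretend these are the optimized (MAP) weights
y = X @ w_map + 0.1 * rng.normal(size=50)

# Laplace approximation: posterior covariance = inverse Hessian of the
# negative log posterior at the MAP. For squared loss with noise variance
# sigma2 and a Gaussian prior with precision tau, H = X^T X / sigma2 + tau * I.
sigma2, tau = 0.01, 1.0
H = X.T @ X / sigma2 + tau * np.eye(3)
cov = np.linalg.inv(H)

# Draw weight realizations from the Gaussian posterior and propagate them
# through the model to obtain a predictive distribution.
samples = rng.multivariate_normal(w_map, cov, size=200)
preds = samples @ X.T                  # each row: predictions under one weight draw
pred_mean = preds.mean(axis=0)
pred_std = preds.std(axis=0)           # predictive uncertainty from weight realizations
```

The spread of `preds` across rows is exactly the "uncertainty propagation from weight realizations" the paper studies; for a PINN the linear map would be replaced by a forward pass of the trained network.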
Related papers
- Hessian-Free Laplace in Bayesian Deep Learning [44.16006844888796]
Hessian-free Laplace (HFL) approximation uses curvature of both the log posterior and network prediction to estimate its variance.
We show that, under standard assumptions of LA in Bayesian deep learning, HFL targets the same variance as LA, and can be efficiently amortized in a pre-trained network.
arXiv Detail & Related papers (2024-03-15T20:47:39Z) - Calibrating Neural Simulation-Based Inference with Differentiable
Coverage Probability [50.44439018155837]
We propose to include a calibration term directly into the training objective of the neural model.
By introducing a relaxation of the classical formulation of calibration error we enable end-to-end backpropagation.
It is directly applicable to existing computational pipelines allowing reliable black-box posterior inference.
arXiv Detail & Related papers (2023-10-20T10:20:45Z) - Bayesian Reasoning for Physics Informed Neural Networks [0.0]
We present the application of the physics-informed neural network (PINN) approach in Bayesian formulation.
For each model or fit, the evidence is computed, which is a measure that classifies the hypothesis.
We have shown that within the Bayesian framework, one can obtain the relative weights between the boundary and equation contributions to the total loss.
arXiv Detail & Related papers (2023-08-25T07:38:50Z) - Neural Importance Sampling for Rapid and Reliable Gravitational-Wave
Inference [59.040209568168436]
We first generate a rapid proposal for the Bayesian posterior using neural networks, and then attach importance weights based on the underlying likelihood and prior.
This provides (1) a corrected posterior free from network inaccuracies, (2) a performance diagnostic (the sample efficiency) for assessing the proposal and identifying failure cases, and (3) an unbiased estimate of the Bayesian evidence.
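The three outputs listed above, a reweighted posterior, a sample-efficiency diagnostic, and an evidence estimate, follow from standard importance sampling once a proposal is available. A minimal 1-D sketch, with a hand-written Gaussian proposal standing in for the neural proposal and an assumed unnormalized target:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_p_unnorm(x):
    # Assumed unnormalized target density (stand-in for likelihood * prior):
    # here exp(-x^2/2), whose true normalizer is sqrt(2*pi).
    return -0.5 * x**2

# Proposal q (stand-in for the neural-network proposal): Gaussian(0.5, 1.2^2).
mu_q, sig_q = 0.5, 1.2
xs = rng.normal(mu_q, sig_q, size=5000)
log_q = -0.5 * ((xs - mu_q) / sig_q) ** 2 - np.log(sig_q * np.sqrt(2 * np.pi))

# Attach importance log-weights to correct proposal inaccuracies.
log_w = log_p_unnorm(xs) - log_q
w = np.exp(log_w - log_w.max())        # stabilized weights

# (2) sample efficiency diagnostic: effective sample size / N, in (0, 1].
eff = w.sum() ** 2 / (len(w) * (w ** 2).sum())

# (3) unbiased evidence estimate: mean of the unnormalized weights.
evidence = np.exp(log_w.max()) * w.mean()
```

A low `eff` flags a poor proposal (a failure case); `evidence` should recover the target's normalizing constant, here sqrt(2*pi) ≈ 2.507.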
We carry out a large study analyzing 42 binary black hole mergers observed by LIGO and Virgo with the SEOBNRv4PHM and IMRPhenomHMXP waveform models.
arXiv Detail & Related papers (2022-10-11T18:00:02Z) - Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We study two types of uncertainty estimation solutions, namely ensemble-based methods and generative-model-based methods, and explain their pros and cons when used in fully-, semi-, and weakly-supervised frameworks.
arXiv Detail & Related papers (2021-10-13T01:23:48Z) - Quantifying Model Predictive Uncertainty with Perturbation Theory [21.591460685054546]
We propose a framework for predictive uncertainty quantification of a neural network.
We use perturbation theory from quantum physics to formulate a moment decomposition problem.
Our approach provides fast model predictive uncertainty estimates with much greater precision and calibration.
arXiv Detail & Related papers (2021-09-22T17:55:09Z) - Sampling-free Variational Inference for Neural Networks with
Multiplicative Activation Noise [51.080620762639434]
We propose a more efficient parameterization of the posterior approximation for sampling-free variational inference.
Our approach yields competitive results for standard regression problems and scales well to large-scale image classification tasks.
arXiv Detail & Related papers (2021-03-15T16:16:18Z) - A Kernel Framework to Quantify a Model's Local Predictive Uncertainty
under Data Distributional Shifts [21.591460685054546]
Internal layer outputs of a trained neural network contain all of the information related to both its mapping function and its input data distribution.
We propose a framework for predictive uncertainty quantification of a trained neural network that explicitly estimates the PDF of its raw prediction space.
The kernel framework is observed to provide model uncertainty estimates with much greater precision based on the ability to detect model prediction errors.
arXiv Detail & Related papers (2021-03-02T00:31:53Z) - Leveraging Global Parameters for Flow-based Neural Posterior Estimation [90.21090932619695]
Inferring the parameters of a model based on experimental observations is central to the scientific method.
A particularly challenging setting is when the model is strongly indeterminate, i.e., when distinct sets of parameters yield identical observations.
We present a method for cracking such indeterminacy by exploiting additional information conveyed by an auxiliary set of observations sharing global parameters.
arXiv Detail & Related papers (2021-02-12T12:23:13Z) - Continuous Wasserstein-2 Barycenter Estimation without Minimax
Optimization [94.18714844247766]
Wasserstein barycenters provide a geometric notion of the weighted average of probability measures based on optimal transport.
We present a scalable algorithm to compute Wasserstein-2 barycenters given sample access to the input measures.
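As a concrete illustration of the "weighted average of probability measures" notion (a closed-form special case, not the paper's scalable algorithm): for 1-D Gaussians N(m_i, s_i^2) with weights lam_i, the Wasserstein-2 barycenter is again a Gaussian, with mean sum(lam_i * m_i) and standard deviation sum(lam_i * s_i).

```python
import numpy as np

# Closed-form W2 barycenter of 1-D Gaussians (illustrative special case).
means = np.array([0.0, 2.0, 4.0])
stds = np.array([1.0, 0.5, 1.5])
lam = np.array([0.2, 0.3, 0.5])        # barycenter weights, summing to 1

bary_mean = float(lam @ means)          # weighted average of the means
bary_std = float(lam @ stds)            # weighted average of the standard deviations
```

Note the barycenter averages standard deviations, not variances: it interpolates the shapes of the measures under optimal transport rather than mixing them.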
arXiv Detail & Related papers (2021-02-02T21:01:13Z) - HypoSVI: Hypocenter inversion with Stein variational inference and
Physics Informed Neural Networks [6.102077733475759]
We introduce a scheme for probabilistic hypocenter inversion with Stein variational inference.
Our approach uses a differentiable forward model in the form of a neural network.
We show that the method scales efficiently with the number of differential travel times.
arXiv Detail & Related papers (2021-01-09T01:56:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.