HypoSVI: Hypocenter inversion with Stein variational inference and
Physics Informed Neural Networks
- URL: http://arxiv.org/abs/2101.03271v2
- Date: Tue, 12 Jan 2021 20:49:19 GMT
- Title: HypoSVI: Hypocenter inversion with Stein variational inference and
Physics Informed Neural Networks
- Authors: Jonathan D. Smith, Zachary E. Ross, Kamyar Azizzadenesheli, Jack B.
Muir
- Abstract summary: We introduce a scheme for probabilistic hypocenter inversion with Stein variational inference.
Our approach uses a differentiable forward model in the form of a physics-informed neural network.
We show that the computational demands scale efficiently with the number of differential times.
- Score: 6.102077733475759
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a scheme for probabilistic hypocenter inversion with Stein
variational inference. Our approach uses a differentiable forward model in the
form of a physics-informed neural network, which we train to solve the Eikonal
equation. This allows for rapid approximation of the posterior by iteratively
optimizing a collection of particles against a kernelized Stein discrepancy. We
show that the method is well-equipped to handle highly non-convex posterior
distributions, which are common in hypocentral inverse problems. A suite of
experiments is performed to examine the influence of the various
hyperparameters. Once trained, the method is valid for any network geometry
within the study area without the need to build travel time tables. We show
that the computational demands scale efficiently with the number of
differential times, making it ideal for large-N sensing technologies like
Distributed Acoustic Sensing.
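The posterior approximation described in the abstract (iteratively optimizing a collection of particles against a kernelized Stein discrepancy) is the Stein variational gradient descent update. A minimal NumPy sketch of one such step, assuming an RBF kernel with fixed bandwidth `h` and a user-supplied score function `grad_logp` (both illustrative choices, not the paper's exact configuration, where the score would come from travel-time residuals through the trained Eikonal network):

```python
import numpy as np

def rbf_kernel(X, h):
    """RBF kernel matrix K[j, i] = exp(-||x_j - x_i||^2 / h) and the
    repulsion term sum_j grad_{x_j} K[j, i] for each particle i."""
    diff = X[:, None, :] - X[None, :, :]          # (n, n, d): x_j - x_i
    K = np.exp(-np.sum(diff ** 2, axis=-1) / h)   # (n, n)
    # grad_{x_j} K[j, i] = -(2/h) * (x_j - x_i) * K[j, i], summed over j
    gradK = -2.0 / h * np.einsum('ji,jid->id', K, diff)
    return K, gradK

def svgd_step(X, grad_logp, h=1.0, eps=0.1):
    """One SVGD update: attraction along the kernel-weighted score,
    plus kernel repulsion that keeps particles spread over the posterior."""
    n = X.shape[0]
    K, gradK = rbf_kernel(X, h)
    phi = (K.T @ grad_logp(X) + gradK) / n        # (n, d) update direction
    return X + eps * phi
```

For a toy target such as a standard Gaussian (`grad_logp = lambda x: -x`), repeatedly applying `svgd_step` drives the particle cloud toward the target density; in the hypocenter setting each particle is a candidate source location in 3-D.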
Related papers
- Solving Poisson Equations using Neural Walk-on-Spheres [80.1675792181381]
We propose Neural Walk-on-Spheres (NWoS), a novel neural PDE solver for the efficient solution of high-dimensional Poisson equations.
We demonstrate the superiority of NWoS in accuracy, speed, and computational costs.
arXiv Detail & Related papers (2024-06-05T17:59:22Z)
- Solving partial differential equations with sampled neural networks [1.8590821261905535]
Approximation of solutions to partial differential equations (PDE) is an important problem in computational science and engineering.
We discuss how sampling the hidden weights and biases of the ansatz network from data-agnostic and data-dependent probability distributions allows us to progress on both challenges.
arXiv Detail & Related papers (2024-05-31T14:24:39Z)
- Towards Faster Non-Asymptotic Convergence for Diffusion-Based Generative Models [49.81937966106691]
We develop a suite of non-asymptotic theory towards understanding the data generation process of diffusion models.
In contrast to prior works, our theory is developed based on an elementary yet versatile non-asymptotic approach.
arXiv Detail & Related papers (2023-06-15T16:30:08Z)
- Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z)
- A DeepParticle method for learning and generating aggregation patterns in multi-dimensional Keller-Segel chemotaxis systems [3.6184545598911724]
We study a regularized interacting particle method for computing aggregation patterns and near singular solutions of a Keller-Segel (KS) chemotaxis system in two and three space dimensions.
We further develop DeepParticle (DP) method to learn and generate solutions under variations of physical parameters.
arXiv Detail & Related papers (2022-08-31T20:52:01Z)
- Probability flow solution of the Fokker-Planck equation [10.484851004093919]
We introduce an alternative scheme based on integrating an ordinary differential equation that describes the flow of probability.
Unlike the dynamics, this equation deterministically pushes samples from the initial density onto samples from the solution at any later time.
Our approach is based on recent advances in score-based diffusion for generative modeling.
arXiv Detail & Related papers (2022-06-09T17:37:09Z)
- Laplace HypoPINN: Physics-Informed Neural Network for hypocenter localization and its predictive uncertainty [0.0]
We develop a PINN-based inversion framework for hypocenter localization.
We investigate the propagation of uncertainties from the random realizations of HypoPINN's weights and biases.
arXiv Detail & Related papers (2022-05-28T13:59:32Z)
- Scaling Structured Inference with Randomization [64.18063627155128]
We propose a family of randomized dynamic programming (RDP) algorithms for scaling structured models to tens of thousands of latent states.
Our method is widely applicable to classical DP-based inference.
It is also compatible with automatic differentiation so can be integrated with neural networks seamlessly.
arXiv Detail & Related papers (2021-12-07T11:26:41Z)
- Neural Variational Gradient Descent [6.414093278187509]
Particle-based approximate Bayesian inference approaches such as Stein Variational Gradient Descent (SVGD) combine the flexibility and convergence guarantees of sampling methods with the computational benefits of variational inference.
We propose Neural Variational Gradient Descent (NVGD), which is based on parameterizing the witness function of the Stein discrepancy by a deep neural network whose parameters are learned in parallel to the inference, mitigating the necessity to make any kernel choices whatsoever.
arXiv Detail & Related papers (2021-07-22T15:10:50Z)
- Leveraging Global Parameters for Flow-based Neural Posterior Estimation [90.21090932619695]
Inferring the parameters of a model based on experimental observations is central to the scientific method.
A particularly challenging setting is when the model is strongly indeterminate, i.e., when distinct sets of parameters yield identical observations.
We present a method for cracking such indeterminacy by exploiting additional information conveyed by an auxiliary set of observations sharing global parameters.
arXiv Detail & Related papers (2021-02-12T12:23:13Z)
- Effective Version Space Reduction for Convolutional Neural Networks [61.84773892603885]
In active learning, sampling bias could pose a serious inconsistency problem and hinder the algorithm from finding the optimal hypothesis.
We examine active learning with convolutional neural networks through the principled lens of version space reduction.
arXiv Detail & Related papers (2020-06-22T17:40:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.