EikoNet: Solving the Eikonal equation with Deep Neural Networks
- URL: http://arxiv.org/abs/2004.00361v3
- Date: Tue, 11 Aug 2020 15:43:51 GMT
- Title: EikoNet: Solving the Eikonal equation with Deep Neural Networks
- Authors: Jonathan D. Smith, Kamyar Azizzadenesheli and Zachary E. Ross
- Abstract summary: We propose EikoNet, a deep learning approach to solving the Eikonal equation.
Our grid-free approach allows for rapid determination of the travel time between any two points within a continuous 3D domain.
The developed approach has important applications to earthquake hypocenter inversion, ray multi-pathing, and tomographic modeling.
- Score: 6.735657356113614
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The recent deep learning revolution has created an enormous opportunity for
accelerating compute capabilities in the context of physics-based simulations.
Here, we propose EikoNet, a deep learning approach to solving the Eikonal
equation, which characterizes the first-arrival-time field in heterogeneous 3D
velocity structures. Our grid-free approach allows for rapid determination of
the travel time between any two points within a continuous 3D domain. These
travel time solutions are allowed to violate the differential equation - which
casts the problem as one of optimization - with the goal of finding network
parameters that minimize the degree to which the equation is violated. In doing
so, the method exploits the differentiability of neural networks to calculate
the spatial gradients analytically, meaning the network can be trained on its
own without ever needing solutions from a finite difference algorithm. EikoNet
is rigorously tested on several velocity models and sampling methods to
demonstrate robustness and versatility. Training and inference are highly
parallelized, making the approach well-suited for GPUs. EikoNet has low memory
overhead, and further avoids the need for travel-time lookup tables. The
developed approach has important applications to earthquake hypocenter
inversion, ray multi-pathing, and tomographic modeling, as well as to other
fields beyond seismology where ray tracing is essential.
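The abstract describes the core idea: a network predicts the travel time between a source and a receiver, and automatic differentiation supplies the spatial gradient needed to penalize violations of the Eikonal equation |∇T(x)| = 1/v(x), so no finite-difference solver is required. A minimal, hypothetical PyTorch sketch of that physics-informed loss (not the authors' code; the network shape and velocity model are illustrative assumptions):

```python
# Hypothetical sketch of the physics-informed training idea behind
# EikoNet, not the authors' implementation: a network T(xs, xr)
# predicts travel time between source xs and receiver xr, and autograd
# supplies the spatial gradient used to penalize violations of the
# Eikonal equation |grad_xr T| = 1 / v(xr).
import torch

class TravelTimeNet(torch.nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(6, hidden), torch.nn.Tanh(),
            torch.nn.Linear(hidden, hidden), torch.nn.Tanh(),
            torch.nn.Linear(hidden, 1),
        )

    def forward(self, xs, xr):
        # Grid-free: any pair of 3D points in the domain is valid input.
        return self.net(torch.cat([xs, xr], dim=-1))

def eikonal_loss(model, xs, xr, velocity):
    xr = xr.clone().requires_grad_(True)
    t = model(xs, xr)
    # Analytic spatial gradient via autograd -- no finite differences.
    grad = torch.autograd.grad(t.sum(), xr, create_graph=True)[0]
    slowness_pred = grad.norm(dim=-1)        # |grad T|
    slowness_true = 1.0 / velocity(xr)       # 1 / v(x)
    return ((slowness_pred - slowness_true) ** 2).mean()

# Toy usage with a constant-velocity model (v = 2 everywhere):
model = TravelTimeNet()
xs = torch.rand(128, 3)
xr = torch.rand(128, 3)
loss = eikonal_loss(model, xs, xr,
                    lambda x: torch.full((x.shape[0],), 2.0))
loss.backward()  # gradients flow to the network parameters
```

Minimizing this residual over randomly sampled point pairs is what casts the problem as optimization rather than numerical PDE solution, and it is why training needs no reference solutions from a finite-difference algorithm.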
Related papers
- Learning Efficient Surrogate Dynamic Models with Graph Spline Networks [28.018442945654364]
We present GraphSplineNets, a novel deep-learning method to speed up the forecasting of physical systems.
Our method uses two differentiable spline collocation methods to efficiently predict response at any location in time and space.
arXiv Detail & Related papers (2023-10-25T06:32:47Z) - Geometry-Informed Neural Operator for Large-Scale 3D PDEs [76.06115572844882]
We propose the geometry-informed neural operator (GINO) to learn the solution operator of large-scale partial differential equations.
We successfully trained GINO to predict the pressure on car surfaces using only five hundred data points.
arXiv Detail & Related papers (2023-09-01T16:59:21Z) - Learning Controllable Adaptive Simulation for Multi-resolution Physics [86.8993558124143]
We introduce Learning controllable Adaptive simulation for Multi-resolution Physics (LAMP) as the first full deep learning-based surrogate model.
LAMP consists of a Graph Neural Network (GNN) for learning the forward evolution, and a GNN-based actor-critic for learning the policy of spatial refinement and coarsening.
We demonstrate that our LAMP outperforms state-of-the-art deep learning surrogate models, and can adaptively trade-off computation to improve long-term prediction error.
arXiv Detail & Related papers (2023-05-01T23:20:27Z) - NeuralStagger: Accelerating Physics-constrained Neural PDE Solver with
Spatial-temporal Decomposition [67.46012350241969]
This paper proposes a general acceleration methodology called NeuralStagger.
It decomposes the original learning task into several coarser-resolution subtasks.
We demonstrate the successful application of NeuralStagger on 2D and 3D fluid dynamics simulations.
arXiv Detail & Related papers (2023-02-20T19:36:52Z) - Physics-informed Deep Super-resolution for Spatiotemporal Data [18.688475686901082]
Deep learning can be used to augment scientific data based on coarse-grained simulations.
We propose a rich and efficient temporal super-resolution framework inspired by physics-informed learning.
Results demonstrate the superior effectiveness and efficiency of the proposed method compared with baseline algorithms.
arXiv Detail & Related papers (2022-08-02T13:57:35Z) - Sampling-free Inference for Ab-Initio Potential Energy Surface Networks [2.088583843514496]
A potential energy surface network (PESNet) has been proposed to reduce training time by solving the Schrödinger equation for many geometries simultaneously.
Here, we address the inference shortcomings by proposing the Potential learning from ab-initio Networks (PlaNet) framework to simultaneously train a surrogate model that avoids expensive Monte-Carlo integration.
In this way, we can accurately model high-resolution multi-dimensional energy surfaces that previously would have been unobtainable via neural wave functions.
arXiv Detail & Related papers (2022-05-30T10:00:59Z) - Joint inference and input optimization in equilibrium networks [68.63726855991052]
A deep equilibrium model is a class of models that forgoes traditional network depth and instead computes the output of a network by finding the fixed point of a single nonlinear layer.
We show that there is a natural synergy between these two settings.
We demonstrate this strategy on various tasks such as training generative models while optimizing over latent codes, training models for inverse problems like denoising and inpainting, adversarial training and gradient based meta-learning.
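The summary above describes computing a network's output as the fixed point of a single nonlinear layer. A minimal, hypothetical NumPy illustration of that forward pass (not the paper's code; the layer, weights, and iteration scheme are illustrative assumptions):

```python
# Hypothetical sketch of a deep equilibrium forward pass, not the
# paper's code: instead of stacking layers, the output z* satisfies
# z* = f(z*, x), found here by plain fixed-point iteration.
import numpy as np

def layer(z, x, W, U):
    # One nonlinear layer; tanh with small W keeps the map contractive,
    # so the iteration converges to a unique fixed point.
    return np.tanh(W @ z + U @ x)

def deq_forward(x, W, U, tol=1e-8, max_iter=500):
    z = np.zeros(W.shape[0])
    for _ in range(max_iter):
        z_next = layer(z, x, W, U)
        if np.linalg.norm(z_next - z) < tol:
            break
        z = z_next
    return z

rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((4, 4))  # small norm -> contraction
U = rng.standard_normal((4, 3))
x = rng.standard_normal(3)
z_star = deq_forward(x, W, U)
```

At convergence, applying the layer once more leaves `z_star` unchanged, which is the fixed-point property the input-optimization strategy in the paper exploits.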
arXiv Detail & Related papers (2021-11-25T19:59:33Z) - DeepPhysics: a physics aware deep learning framework for real-time
simulation [0.0]
We propose a solution to simulate hyper-elastic materials using a data-driven approach.
A neural network is trained to learn the non-linear relationship between boundary conditions and the resulting displacement field.
The results show that our network architecture trained with a limited amount of data can predict the displacement field in less than a millisecond.
arXiv Detail & Related papers (2021-09-17T12:15:47Z) - Deep Magnification-Flexible Upsampling over 3D Point Clouds [103.09504572409449]
We propose a novel end-to-end learning-based framework to generate dense point clouds.
We first formulate the problem explicitly, which boils down to determining the weights and high-order approximation errors.
Then, we design a lightweight neural network to adaptively learn unified and sorted weights as well as the high-order refinements.
arXiv Detail & Related papers (2020-11-25T14:00:18Z) - Communication-Efficient Distributed Stochastic AUC Maximization with
Deep Neural Networks [50.42141893913188]
We study distributed stochastic AUC maximization for deep neural networks at large scale.
Our method requires far fewer communication rounds in theory than existing approaches.
Our experiments on several datasets demonstrate the effectiveness of the method and confirm our theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.