Neural Conservation Laws: A Divergence-Free Perspective
- URL: http://arxiv.org/abs/2210.01741v1
- Date: Tue, 4 Oct 2022 17:01:53 GMT
- Title: Neural Conservation Laws: A Divergence-Free Perspective
- Authors: Jack Richter-Powell, Yaron Lipman, Ricky T. Q. Chen
- Abstract summary: We propose building divergence-free neural networks through the concept of differential forms.
We prove these models are universal and so can be used to represent any divergence-free vector field.
- Score: 36.668126758052814
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We investigate the parameterization of deep neural networks that by design
satisfy the continuity equation, a fundamental conservation law. This is
enabled by the observation that solutions of the continuity equation can be
represented as a divergence-free vector field. We hence propose building
divergence-free neural networks through the concept of differential forms, and
with the aid of automatic differentiation, realize two practical constructions.
As a result, we can parameterize pairs of densities and vector fields that
always satisfy the continuity equation by construction, foregoing the need for
extra penalty methods or expensive numerical simulation. Furthermore, we prove
these models are universal and so can be used to represent any divergence-free
vector field. Finally, we experimentally validate our approaches on neural
network-based solutions to fluid equations, solving for the Hodge decomposition, and learning dynamical optimal transport maps.
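To make the construction concrete, here is a minimal JAX sketch of the 1+1-dimensional special case: the rotated gradient of any scalar potential is divergence-free by equality of mixed partials, so reading off a density and a flux from its components satisfies the continuity equation automatically. The MLP, its sizes, and the test point are illustrative assumptions, not the paper's exact differential-forms construction (which also handles making rho a proper density).

```python
# Minimal sketch (not the paper's exact construction): in 1+1D spacetime
# z = (t, x), the rotated gradient of a scalar potential psi is
# divergence-free, so rho = v[0], flux = v[1] satisfy
# d_t rho + d_x flux = 0 by construction, for ANY network parameters.
import jax
import jax.numpy as jnp

def init_mlp(key, sizes=(2, 64, 64, 1)):
    params = []
    for m, n in zip(sizes[:-1], sizes[1:]):
        key, kw = jax.random.split(key)
        params.append((jax.random.normal(kw, (n, m)) / jnp.sqrt(m),
                       jnp.zeros(n)))
    return params

def psi(params, z):
    # scalar potential psi: R^2 -> R (hypothetical small MLP)
    h = z
    for W, b in params[:-1]:
        h = jnp.tanh(W @ h + b)
    W, b = params[-1]
    return (W @ h + b)[0]

def field(params, z):
    # v = (d psi/dx, -d psi/dt): divergence-free by mixed-partial symmetry
    g = jax.grad(psi, argnums=1)(params, z)
    return jnp.array([g[1], -g[0]])

def spacetime_divergence(params, z):
    # d_t v[0] + d_x v[1]; equals the continuity-equation residual
    return jnp.trace(jax.jacfwd(field, argnums=1)(params, z))

params = init_mlp(jax.random.PRNGKey(0))
z = jnp.array([0.5, -1.3])  # (t, x)
print(spacetime_divergence(params, z))  # 0 up to floating-point round-off
```

In higher dimensions the analogous trick takes v_i = sum_j d_j A_ij for an antisymmetric matrix-valued network A, since contracting symmetric second derivatives against an antisymmetric A annihilates the divergence; the paper's differential-forms view systematizes this and underlies its universality result.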
Related papers
- A Mathematical Analysis of Neural Operator Behaviors [0.0]
This paper presents a rigorous framework for analyzing the behaviors of neural operators.
We focus on their stability, convergence, clustering dynamics, universality, and generalization error.
We aim to offer clear and unified guidance in a single setting for the future design of neural operator-based methods.
arXiv Detail & Related papers (2024-10-28T19:38:53Z) - Finite Operator Learning: Bridging Neural Operators and Numerical Methods for Efficient Parametric Solution and Optimization of PDEs [0.0]
We introduce a method that combines neural operators, physics-informed machine learning, and standard numerical methods for solving PDEs.
We can parametrically solve partial differential equations in a data-free manner and provide accurate sensitivities.
Our study focuses on the steady-state heat equation within heterogeneous materials.
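As a hedged illustration of the physics-informed ingredient, the sketch below sets up a collocation-based residual loss for a 1D steady-state heat equation with heterogeneous conductivity; the networks, the conductivity profile k, the collocation points, and the omission of boundary terms are all illustrative assumptions, not the authors' finite-operator formulation.

```python
# Sketch of a data-free physics-informed loss for d/dx (k(x) du/dx) = 0;
# illustrative only, not the paper's method.
import jax
import jax.numpy as jnp

def init(key, sizes):
    theta = []
    for m, n in zip(sizes[:-1], sizes[1:]):
        key, kw = jax.random.split(key)
        theta.append((jax.random.normal(kw, (n, m)) / jnp.sqrt(m),
                      jnp.zeros(n)))
    return theta

def u_net(theta, x):
    # hypothetical scalar network for the temperature field u(x)
    h = jnp.reshape(x, (1,))
    for W, b in theta[:-1]:
        h = jnp.tanh(W @ h + b)
    W, b = theta[-1]
    return (W @ h + b)[0]

def k(x):
    # illustrative heterogeneous conductivity
    return 1.0 + 0.5 * jnp.sin(3.0 * x)

def residual(theta, x):
    # PDE residual d/dx (k du/dx) at one collocation point
    flux = lambda y: k(y) * jax.grad(u_net, argnums=1)(theta, y)
    return jax.grad(flux)(x)

def loss(theta, xs):
    # mean squared residual; boundary-condition terms would be added in practice
    return jnp.mean(jax.vmap(lambda x: residual(theta, x) ** 2)(xs))

theta = init(jax.random.PRNGKey(0), (1, 32, 32, 1))
xs = jnp.linspace(0.0, 1.0, 64)
print(loss(theta, xs))
```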
arXiv Detail & Related papers (2024-07-04T21:23:12Z) - D4FT: A Deep Learning Approach to Kohn-Sham Density Functional Theory [79.50644650795012]
We propose a deep learning approach to solve Kohn-Sham Density Functional Theory (KS-DFT).
We prove that such an approach has the same expressivity as the SCF method, yet reduces the computational complexity.
In addition, we show that our approach enables us to explore more complex neural-based wave functions.
arXiv Detail & Related papers (2023-03-01T10:38:10Z) - Learning Functional Transduction [9.926231893220063]
We show that transductive regression principles can be meta-learned through gradient descent to form efficient in-context neural approximators.
We demonstrate the benefit of our meta-learned transductive approach to model complex physical systems influenced by varying external factors with little data.
arXiv Detail & Related papers (2023-02-01T09:14:28Z) - A Functional-Space Mean-Field Theory of Partially-Trained Three-Layer Neural Networks [49.870593940818715]
We study the infinite-width limit of a type of three-layer NN model whose first layer is random and fixed.
Our theory accommodates different scaling choices of the model, resulting in two regimes of the MF limit that demonstrate distinctive behaviors.
arXiv Detail & Related papers (2022-10-28T17:26:27Z) - Uncertainty Quantification for Transport in Porous media using Parameterized Physics Informed neural Networks [0.0]
We present a parametrization of the Physics-Informed Neural Network (P-PINN) approach to tackle the problem of uncertainty quantification in reservoir engineering problems.
We demonstrate the approach with immiscible two-phase flow displacement (the Buckley-Leverett problem) in a heterogeneous porous medium.
We show that, provided a proper parameterization of the uncertainty space, PINNs can produce solutions that closely match both the ensemble realizations and the moments.
arXiv Detail & Related papers (2022-05-19T06:23:23Z) - Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the computation graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
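A minimal sketch of the message-passing idea on a bidirected 1D grid graph, assuming illustrative MLP message and update functions (not the authors' architecture): the learned message acting on neighboring node states plays the role of a learned finite-difference stencil.

```python
# Generic learned message-passing step on a 1D grid graph; illustrative.
import jax
import jax.numpy as jnp

def init(key, sizes):
    params = []
    for m, n in zip(sizes[:-1], sizes[1:]):
        key, kw = jax.random.split(key)
        params.append((jax.random.normal(kw, (m, n)) / jnp.sqrt(m),
                       jnp.zeros(n)))
    return params

def mlp(params, x):
    for W, b in params[:-1]:
        x = jnp.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b

def message_passing_step(phi, psi, h, senders, receivers):
    # messages depend on both endpoint states, like a learned stencil
    m = mlp(phi, jnp.concatenate([h[senders], h[receivers]], axis=-1))
    agg = jax.ops.segment_sum(m, receivers, num_segments=h.shape[0])
    # residual node update from aggregated messages
    return h + mlp(psi, jnp.concatenate([h, agg], axis=-1))

key = jax.random.PRNGKey(0)
n, d = 8, 16
h = jax.random.normal(key, (n, d))        # node states (e.g. solution values)
left = jnp.arange(n - 1)                  # edges i -> i+1 and back
s = jnp.concatenate([left, left + 1])
r = jnp.concatenate([left + 1, left])
phi = init(key, (2 * d, 32, d))
psi = init(key, (2 * d, 32, d))
h_next = message_passing_step(phi, psi, h, s, r)  # one solver step, (n, d)
```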
arXiv Detail & Related papers (2022-02-07T17:47:46Z) - Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z) - Phase space learning with neural networks [0.0]
This work proposes an autoencoder neural network as a nonlinear generalization of projection-based methods for solving Partial Differential Equations (PDEs).
The proposed deep learning architecture generates the dynamics of PDEs by integrating them entirely within a heavily reduced latent space, without intermediate reconstructions, and then decodes the latent solution back to the original space.
We show that properly regularized neural networks can reliably learn the global characteristics of a dynamical system's phase space from the sample data of a single path, and can predict unseen bifurcations.
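A schematic of the encode, integrate-in-latent-space, decode loop described above, assuming a hypothetical latent dimension, small MLPs, and a plain explicit-Euler latent integrator rather than the authors' setup.

```python
# Schematic encode -> integrate-in-latent-space -> decode rollout; the
# networks, dimensions, and Euler integrator are illustrative.
import jax
import jax.numpy as jnp

def init(key, sizes):
    params = []
    for m, n in zip(sizes[:-1], sizes[1:]):
        key, kw = jax.random.split(key)
        params.append((jax.random.normal(kw, (m, n)) / jnp.sqrt(m),
                       jnp.zeros(n)))
    return params

def mlp(params, x):
    for W, b in params[:-1]:
        x = jnp.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b

def rollout(enc, dyn, dec, u0, steps, dt=1e-2):
    z0 = mlp(enc, u0)                     # compress full state to latent z
    def step(z, _):
        z = z + dt * mlp(dyn, z)          # integrate entirely in latent space
        return z, mlp(dec, z)             # decode only to emit the output
    _, traj = jax.lax.scan(step, z0, None, length=steps)
    return traj                           # (steps, d_full), no re-encoding

key = jax.random.PRNGKey(0)
d_full, d_lat = 128, 4                    # drastic dimensionality reduction
enc = init(key, (d_full, 64, d_lat))
dyn = init(key, (d_lat, 64, d_lat))
dec = init(key, (d_lat, 64, d_full))
u0 = jax.random.normal(key, (d_full,))
traj = rollout(enc, dyn, dec, u0, steps=100)
```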
arXiv Detail & Related papers (2020-06-22T20:28:07Z) - Neural Operator: Graph Kernel Network for Partial Differential Equations [57.90284928158383]
This work generalizes neural networks so that they can learn mappings between infinite-dimensional spaces (operators).
We formulate approximation of the infinite-dimensional mapping by composing nonlinear activation functions and a class of integral operators.
Experiments confirm that the proposed graph kernel network does have the desired properties and show competitive performance compared to state-of-the-art solvers.
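A hedged sketch of one kernel-integral layer, v(x_i) <- tanh(W v(x_i) + (1/n) sum_j k(x_i, x_j) v(x_j)), using a scalar-valued learned kernel and Monte Carlo quadrature over sample locations; the shapes and networks are illustrative rather than the paper's exact graph kernel network.

```python
# One kernel-integral layer with a learned scalar kernel; illustrative.
import jax
import jax.numpy as jnp

def init(key, sizes):
    params = []
    for m, n in zip(sizes[:-1], sizes[1:]):
        key, kw = jax.random.split(key)
        params.append((jax.random.normal(kw, (m, n)) / jnp.sqrt(m),
                       jnp.zeros(n)))
    return params

def mlp(params, x):
    for W, b in params[:-1]:
        x = jnp.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b

def kernel_layer(W, kparams, xs, vs):
    n = xs.shape[0]
    # all (x_i, x_j) pairs, fed through a small kernel network
    pairs = jnp.concatenate([jnp.repeat(xs, n, axis=0),
                             jnp.tile(xs, (n, 1))], axis=-1)
    k = mlp(kparams, pairs).reshape(n, n)
    integral = (k @ vs) / n               # Monte Carlo estimate of the integral
    return jnp.tanh(vs @ W + integral)

key = jax.random.PRNGKey(0)
n, dx, d = 32, 1, 8
xs = jax.random.uniform(key, (n, dx))     # sample locations in the domain
vs = jax.random.normal(key, (n, d))       # input function values at xs
W = jax.random.normal(key, (d, d)) / jnp.sqrt(d)
kparams = init(key, (2 * dx, 32, 1))
out = kernel_layer(W, kparams, xs, vs)    # updated function values, (n, d)
```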
arXiv Detail & Related papers (2020-03-07T01:56:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.