Bayesian Learning to Discover Mathematical Operations in Governing
Equations of Dynamic Systems
- URL: http://arxiv.org/abs/2206.00669v1
- Date: Wed, 1 Jun 2022 10:31:14 GMT
- Title: Bayesian Learning to Discover Mathematical Operations in Governing
Equations of Dynamic Systems
- Authors: Hongpeng Zhou, Wei Pan
- Abstract summary: This work presents a new representation for governing equations by designing the Mathematical Operation Network (MathONet) with a deep neural network-like hierarchical structure.
A MathONet is typically regarded as a super-graph with a redundant structure, a sub-graph of which can yield the governing equation.
- Score: 3.1544304017740634
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Discovering governing equations from data is critical for diverse scientific
disciplines, as they can provide insights into the underlying phenomena of
dynamic systems. This work presents a new representation for governing
equations by designing the Mathematical Operation Network (MathONet) with a
deep neural network-like hierarchical structure. Specifically, a MathONet is
built by stacking layers of unary operations (e.g., sin, cos, log) and binary
operations (e.g., +, -). An initialized MathONet is typically
regarded as a super-graph with a redundant structure, a sub-graph of which can
yield the governing equation. We develop a sparse group Bayesian learning
algorithm to extract the sub-graph by employing structurally constructed priors
over the redundant mathematical operations. As demonstrated on the chaotic
Lorenz system, the Lotka-Volterra system, and the Kolmogorov-Petrovsky-Piskunov
system, the proposed method can discover ordinary differential equations (ODEs)
and partial differential equations (PDEs) from observations given only a
limited set of mathematical operations, without any prior knowledge of the
possible expressions of the ODEs and PDEs.
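To make the super-graph picture concrete, here is a minimal, hypothetical sketch of a MathONet-style forward pass; the names, layer structure, and weights are illustrative assumptions, not the authors' implementation. A unary layer applies operations to weighted inputs, a signed weighted sum stands in for the binary +/- layer, and zeroing groups of weights (as the sparse group Bayesian prior is designed to do) leaves a sub-graph that reads off as an equation.

```python
import numpy as np

# Hypothetical MathONet-style forward pass (illustrative, not the
# authors' code): unary ops over weighted inputs, then a signed sum
# standing in for the binary +/- layer.

UNARY_OPS = [np.sin, np.cos, lambda z: z]   # identity passes raw terms through

def mathonet_forward(x, W_unary, w_binary):
    """x: state vector; W_unary: (n_ops, dim); w_binary: (n_ops,)."""
    pre = W_unary @ x                        # one weighted input per unary op
    h = np.array([op(p) for op, p in zip(UNARY_OPS, pre)])
    return w_binary @ h                      # signed sum = binary +/- layer

# If learning drove all but a few weights to zero, the surviving
# sub-graph could encode, e.g., the first Lorenz equation dx/dt = 10*(y - x).
state = np.array([1.0, 2.0, 3.0])            # (x, y, z)
W = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0],
              [-10.0, 10.0, 0.0]])
w = np.array([0.0, 0.0, 1.0])
print(mathonet_forward(state, W, w))         # -> 10.0 = 10*(2 - 1)
```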
Related papers
- A Recursively Recurrent Neural Network (R2N2) Architecture for Learning
Iterative Algorithms [64.3064050603721]
We generalize the Runge-Kutta neural network to a recurrent neural network (R2N2) superstructure for the design of customized iterative algorithms.
We demonstrate that regular training of the weight parameters inside the proposed superstructure on input/output data of various computational problem classes yields similar iterations to Krylov solvers for linear equation systems, Newton-Krylov solvers for nonlinear equation systems, and Runge-Kutta solvers for ordinary differential equations.
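As a rough illustration of the superstructure idea, a Runge-Kutta step is itself a short recurrence over stages, and the tableau coefficients occupy the slots that R2N2-style training would treat as weights. The coefficients below are the classical RK4 values, used purely as a stand-in for learned ones, and the vector field is assumed autonomous for brevity.

```python
import numpy as np

# Classical RK4 tableau; in an R2N2-style superstructure these entries
# would be trainable weights rather than fixed constants.
A = np.array([[0.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.0, 0.0],
              [0.0, 0.5, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
B = np.array([1/6, 1/3, 1/3, 1/6])

def rk_step(f, y, h):
    ks = []
    for i in range(4):                       # recurrent pass over the stages
        y_stage = y + h * sum(A[i, j] * ks[j] for j in range(i))
        ks.append(f(y_stage))
    return y + h * sum(b * k for b, k in zip(B, ks))

# One step of dy/dt = -y from y(0) = 1; exact value is exp(-0.1) ~ 0.904837.
print(rk_step(lambda y: -y, 1.0, 0.1))
```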
arXiv Detail & Related papers (2022-11-22T16:30:33Z)
- Semi-supervised Learning of Partial Differential Operators and Dynamical
Flows [68.77595310155365]
We present a novel method that combines a hyper-network solver with a Fourier Neural Operator architecture.
We test our method on various time evolution PDEs, including nonlinear fluid flows in one, two, and three spatial dimensions.
The results show that the new method improves learning accuracy at the supervised time points and is able to interpolate the solutions to any intermediate time.
arXiv Detail & Related papers (2022-07-28T19:59:14Z)
- Neural Laplace: Learning diverse classes of differential equations in the
Laplace domain [86.52703093858631]
We propose a unified framework for learning diverse classes of differential equations (DEs) including all the aforementioned ones.
Instead of modelling the dynamics in the time domain, we model them in the Laplace domain, where history-dependencies and discontinuities in time can be represented as summations of complex exponentials.
In the experiments, Neural Laplace shows superior performance in modelling and extrapolating the trajectories of diverse classes of DEs.
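The Laplace-domain representation can be illustrated with a toy example; the poles and residues below are made up for illustration, whereas Neural Laplace learns such representations with networks. A transform with complex-conjugate poles corresponds in the time domain to a damped oscillation, i.e., a sum of complex exponentials.

```python
import numpy as np

# y(t) = sum_k r_k * exp(p_k * t): the time-domain signal belonging to a
# Laplace transform with poles p_k and residues r_k (values made up).
poles    = np.array([-0.5 + 3.0j, -0.5 - 3.0j])   # conjugate pair: damped oscillation
residues = np.array([ 0.5 + 0.0j,  0.5 - 0.0j])

def trajectory(t):
    # Conjugate pairs make the imaginary parts cancel, leaving a real signal.
    return np.real(sum(r * np.exp(p * t) for r, p in zip(residues, poles)))

t = np.linspace(0.0, 5.0, 6)
print(trajectory(t))        # equals exp(-0.5 t) * cos(3 t) at these points
```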
arXiv Detail & Related papers (2022-06-10T02:14:59Z)
- On Neural Differential Equations [13.503274710499971]
In particular, neural differential equations (NDEs) demonstrate that neural networks and differential equations are two sides of the same coin.
NDEs are suitable for tackling generative problems, dynamical systems, and time series.
NDEs offer high-capacity function approximation, strong priors on model space, the ability to handle irregular data, memory efficiency, and a wealth of available theory on both sides.
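A minimal sketch of the neural-ODE instance of this idea follows; the weights are random placeholders, and forward Euler stands in for a real solver. The model is an ODE whose right-hand side is a small network, so a forward pass is an integration, and training would backpropagate through the solver or use the adjoint method.

```python
import numpy as np

# Neural ODE sketch: dy/dt = f_theta(y) with f_theta a tiny MLP
# (placeholder random weights), integrated by forward Euler.
rng = np.random.default_rng(0)
W1, b1 = 0.5 * rng.normal(size=(8, 2)), np.zeros(8)
W2, b2 = 0.5 * rng.normal(size=(2, 8)), np.zeros(2)

def f_theta(y):
    return W2 @ np.tanh(W1 @ y + b1) + b2

def odeint_euler(y0, t0, t1, steps=100):
    y, h = np.asarray(y0, dtype=float), (t1 - t0) / steps
    for _ in range(steps):
        y = y + h * f_theta(y)               # forward Euler update
    return y

print(odeint_euler([1.0, 0.0], 0.0, 1.0))    # state at t = 1
```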
arXiv Detail & Related papers (2022-02-04T23:32:29Z)
- Artificial neural network as a universal model of nonlinear dynamical
systems [0.0]
The map is built as an artificial neural network whose weights encode a modeled system.
We consider the Lorenz system, the Rössler system, and the Hindmarsh-Rose
neuron model.
High similarity is observed for visual images of the attractors, power spectra,
bifurcation diagrams, and Lyapunov exponents.
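The core construction can be sketched as a one-step map realized by a small network and iterated to generate an orbit. The weights below are random placeholders; in the paper they are trained so that the iterated map reproduces the target system's attractor and statistics.

```python
import numpy as np

# Discrete map y_{n+1} = N(y_n) as a tiny residual network, iterated to
# produce a trajectory. Random weights here, so the orbit is meaningless
# except as a demonstration of the mechanism.
rng = np.random.default_rng(1)
W1 = 0.4 * rng.normal(size=(16, 3))
W2 = 0.4 * rng.normal(size=(3, 16))

def net_map(y):
    return y + np.tanh(W2 @ np.tanh(W1 @ y))   # residual one-step update

y = np.array([1.0, 1.0, 1.0])
orbit = np.empty((1000, 3))
for n in range(1000):                           # iterate the map
    y = net_map(y)
    orbit[n] = y
print(orbit[-1])
```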
arXiv Detail & Related papers (2021-03-06T16:02:41Z)
- Solving non-linear Kolmogorov equations in large dimensions by using deep
learning: a numerical comparison of discretization schemes [16.067228939231047]
Non-linear Kolmogorov partial differential equations are successfully used to describe a wide range of time-dependent phenomena.
Deep learning has been introduced to solve these equations in high-dimensional regimes.
We show that, for some discretization schemes, improvements in the accuracy are possible without affecting the observed computational complexity.
arXiv Detail & Related papers (2020-12-09T07:17:26Z)
- Symbolically Solving Partial Differential Equations using Deep Learning
[5.1964883240501605]
We describe a neural-based method for generating exact or approximate solutions to differential equations.
Unlike other neural methods, our system returns symbolic expressions that can be interpreted directly.
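For a sense of what directly interpretable symbolic output looks like, here is a small example using SymPy's classical solver as a stand-in (not the paper's neural method): solve y'' + y = 0 symbolically and verify the solution by substitution.

```python
import sympy as sp

# Solve y'' + y = 0 and check the symbolic solution by substitution.
t = sp.symbols("t")
y = sp.Function("y")
ode = y(t).diff(t, 2) + y(t)

sol = sp.dsolve(sp.Eq(ode, 0), y(t))
print(sol)                        # y(t) = C1*sin(t) + C2*cos(t)
print(sp.checkodesol(ode, sol))   # (True, 0): residual vanishes identically
```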
arXiv Detail & Related papers (2020-11-12T22:16:03Z)
- A Neuro-Symbolic Method for Solving Differential and Functional Equations
[6.899578710832262]
We introduce a method for generating symbolic expressions to solve differential equations.
Unlike existing methods, our system does not require learning a language model over symbolic mathematics.
We show how the system can be effortlessly generalized to find symbolic solutions to other mathematical tasks.
arXiv Detail & Related papers (2020-11-04T17:13:25Z)
- Fourier Neural Operator for Parametric Partial Differential Equations
[57.90284928158383]
We formulate a new neural operator by parameterizing the integral kernel directly in Fourier space.
We perform experiments on Burgers' equation, Darcy flow, and the Navier-Stokes equation.
It is up to three orders of magnitude faster compared to traditional PDE solvers.
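A stripped-down sketch of the central operation, a single spectral convolution on a 1-D grid: transform the input function to Fourier space, multiply the lowest modes by complex weights, and transform back. Real FNO layers add channels, a pointwise linear path, and a nonlinearity; the weights here are placeholders.

```python
import numpy as np

# Spectral convolution: the kernel is parameterized directly in Fourier
# space and acts mode-by-mode on the lowest modes.
def spectral_conv(u, R, n_modes):
    """u: samples of a function on a uniform 1-D grid; R: (n_modes,) complex."""
    u_hat = np.fft.rfft(u)                    # to Fourier space
    out_hat = np.zeros_like(u_hat)
    out_hat[:n_modes] = R * u_hat[:n_modes]   # kernel acts on low modes only
    return np.fft.irfft(out_hat, n=len(u))    # back to physical space

x = np.linspace(0.0, 2 * np.pi, 128, endpoint=False)
u = np.sin(x) + 0.3 * np.sin(4 * x)
R = np.ones(8, dtype=complex)                 # placeholder spectral weights
print(spectral_conv(u, R, 8)[:4])
```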
arXiv Detail & Related papers (2020-10-18T00:34:21Z)
- Multipole Graph Neural Operator for Parametric Partial Differential
Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
- The data-driven physical-based equations discovery using evolutionary
approach [77.34726150561087]
We describe the algorithm for the mathematical equations discovery from the given observations data.
The algorithm combines genetic programming with sparse regression.
It can be used to discover governing analytical equations as well as partial
differential equations (PDEs).
arXiv Detail & Related papers (2020-04-03T17:21:57Z)
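The sparse-regression half of such a discovery pipeline can be sketched as follows; the genetic-programming half, which would evolve the candidate-term library, is omitted, and the data, library, and threshold are illustrative with derivatives assumed known.

```python
import numpy as np

# Thresholded least squares over a library of candidate terms, shown on
# noiseless samples of dx/dt = 10*(y - x) from the Lorenz system.
rng = np.random.default_rng(2)
X = rng.uniform(-10.0, 10.0, size=(500, 3))       # sampled states (x, y, z)
dxdt = 10.0 * (X[:, 1] - X[:, 0])                 # "measured" derivative

library = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1], X[:, 2],
                           X[:, 0] * X[:, 1], X[:, 0] * X[:, 2]])
names = ["1", "x", "y", "z", "xy", "xz"]

coef, *_ = np.linalg.lstsq(library, dxdt, rcond=None)
coef[np.abs(coef) < 0.1] = 0.0                    # enforce sparsity
print({n: round(c, 3) for n, c in zip(names, coef) if c})
# -> {'x': -10.0, 'y': 10.0}, i.e. dx/dt = -10x + 10y
```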