The data-driven physical-based equations discovery using evolutionary
approach
- URL: http://arxiv.org/abs/2004.01680v1
- Date: Fri, 3 Apr 2020 17:21:57 GMT
- Title: The data-driven physical-based equations discovery using evolutionary
approach
- Authors: Alexander Hvatov and Mikhail Maslyaev
- Abstract summary: We describe an algorithm for discovering mathematical equations from given observational data.
The algorithm combines genetic programming with sparse regression.
It can be used for governing analytical equation discovery as well as for partial differential equation (PDE) discovery.
- Score: 77.34726150561087
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern machine learning methods allow one to obtain data-driven
models in various ways. However, the more complex the model is, the harder it
is to interpret. In this paper, we describe an algorithm for discovering
mathematical equations from given observational data. The algorithm combines
genetic programming with sparse regression.
The algorithm can produce the resulting models in different forms. For
example, it can be used for governing analytical equation discovery as
well as for partial differential equation (PDE) discovery.
The main idea is to collect a bag of building blocks (simple functions or
their derivatives of arbitrary order) and then draw them from the bag to
create combinations that represent terms of the final equation. The candidate
terms are passed to an evolutionary algorithm, which evolves the selection.
The evolutionary steps are combined with sparse regression to keep only the
significant terms. As a result, we obtain a short and interpretable expression
that describes the physical process behind the data.
Two examples of applying the algorithm are described in the paper: PDE
discovery for metocean processes and function discovery for acoustics.
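Below is a minimal, self-contained sketch of the workflow described in the abstract: a bag of candidate building blocks is assembled, subsets of terms are evolved with a simple genetic loop, and a sparse-regression step (least squares with hard thresholding) keeps only the significant terms. This is not the authors' implementation; the toy target du/dt = -2*u + sin(t), the threshold value, and the population settings are assumptions chosen purely for illustration.

```python
# Illustrative sketch of bag-of-terms equation discovery:
# evolutionary selection of candidate terms + sparse regression.
# NOT the paper's implementation; toy problem and constants are assumed.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic observations of u(t) that actually obey du/dt = -2*u + sin(t)
t = np.linspace(0.0, 10.0, 400)
u = 0.2 * np.exp(-2 * t) + 0.4 * np.sin(t) - 0.2 * np.cos(t)
du = np.gradient(u, t)                      # "observed" derivative to be explained

# Bag of building blocks: simple functions of the observed data
blocks = {
    "1": np.ones_like(t),
    "u": u,
    "u^2": u ** 2,
    "sin(t)": np.sin(t),
    "cos(t)": np.cos(t),
    "t": t,
}
names = list(blocks)
library = np.column_stack([blocks[n] for n in names])    # (samples, terms)


def fit(mask):
    """Least-squares coefficients for the selected terms and the fit error."""
    if not mask.any():
        return None, np.inf
    coef, *_ = np.linalg.lstsq(library[:, mask], du, rcond=None)
    return coef, np.mean((library[:, mask] @ coef - du) ** 2)


def fitness(mask):
    # Fit error plus a small parsimony penalty per active term
    return fit(mask)[1] + 1e-3 * mask.sum()


# Evolutionary loop over boolean masks that select subsets of terms
pop = rng.random((20, len(names))) < 0.5
for _ in range(50):
    pop = pop[np.argsort([fitness(m) for m in pop])][:10]   # truncation selection
    children = []
    for _ in range(10):
        a, b = pop[rng.integers(10, size=2)]
        cut = rng.integers(1, len(names))
        child = np.concatenate([a[:cut], b[cut:]])           # one-point crossover
        flip = rng.random(len(names)) < 0.1                  # bit-flip mutation
        children.append(np.where(flip, ~child, child))
    pop = np.vstack([pop, np.array(children)])

best = pop[np.argmin([fitness(m) for m in pop])]

# Sparse-regression step: refit and drop terms with negligible coefficients
coef, _ = fit(best)
dense = np.zeros(len(names))
dense[best] = coef
keep = np.abs(dense) > 0.05                                  # assumed threshold
final_coef, _ = fit(keep)

print("du/dt ~ " + " + ".join(
    f"({c:.3f})*{n}" for c, n in zip(final_coef, np.array(names)[keep])
))  # expected output is close to (-2)*u + (1)*sin(t)
```

For PDE discovery the same loop applies, with the building blocks replaced by numerically estimated partial derivatives of the observed field.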
Related papers
- Comparison of Single- and Multi- Objective Optimization Quality for Evolutionary Equation Discovery [77.34726150561087]
Evolutionary differential equation discovery has proved to be a tool for obtaining equations with fewer a priori assumptions.
The proposed comparison approach is demonstrated on classical model examples -- the Burgers equation, the wave equation, and the Korteweg-de Vries equation.
arXiv Detail & Related papers (2023-06-29T15:37:19Z)
- Score-based Diffusion Models in Function Space [140.792362459734]
Diffusion models have recently emerged as a powerful framework for generative modeling.
We introduce a mathematically rigorous framework called Denoising Diffusion Operators (DDOs) for training diffusion models in function space.
We show that the corresponding discretized algorithm generates accurate samples at a fixed cost independent of the data resolution.
arXiv Detail & Related papers (2023-02-14T23:50:53Z)
- D-CIPHER: Discovery of Closed-form Partial Differential Equations [80.46395274587098]
We propose D-CIPHER, which is robust to measurement artifacts and can uncover a new and very general class of differential equations.
We further design a novel optimization procedure, CoLLie, to help D-CIPHER search through this class efficiently.
arXiv Detail & Related papers (2022-06-21T17:59:20Z)
- Inverting brain grey matter models with likelihood-free inference: a tool for trustable cytoarchitecture measurements [62.997667081978825]
Characterisation of the brain grey matter cytoarchitecture with quantitative sensitivity to soma density and volume remains an unsolved challenge in dMRI.
We propose a new forward model, specifically a new system of equations, requiring a few relatively sparse b-shells.
We then apply modern tools from Bayesian analysis known as likelihood-free inference (LFI) to invert our proposed model.
arXiv Detail & Related papers (2021-11-15T09:08:27Z)
- Feature Engineering with Regularity Structures [4.082216579462797]
We investigate the use of models from the theory of regularity structures as features in machine learning tasks.
We provide a flexible definition of a model feature vector associated to a space-time signal, along with two algorithms which illustrate ways in which these features can be combined with linear regression.
We apply these algorithms in several numerical experiments designed to learn solutions to PDEs with a given forcing and boundary data.
arXiv Detail & Related papers (2021-08-12T17:53:47Z)
- Discovery of Nonlinear Dynamical Systems using a Runge-Kutta Inspired Dictionary-based Sparse Regression Approach [9.36739413306697]
We blend machine learning and dictionary-based learning with numerical analysis tools to discover governing differential equations.
We obtain interpretable and parsimonious models that tend to generalize better beyond the sampling regime.
We discuss its extension to governing equations, containing rational nonlinearities that typically appear in biological networks.
arXiv Detail & Related papers (2021-05-11T08:46:51Z)
- Shape-constrained Symbolic Regression -- Improving Extrapolation with Prior Knowledge [0.0]
The aim is to find models which conform to expected behaviour and which have improved extrapolation capabilities.
The algorithms are tested on a set of 19 synthetic and four real-world regression problems.
Shape-constrained regression produces the best results for the test set but also significantly larger models.
arXiv Detail & Related papers (2021-03-29T14:04:18Z)
- A Probabilistic State Space Model for Joint Inference from Differential Equations and Data [23.449725313605835]
We show a new class of solvers for ordinary differential equations (ODEs) that phrase the solution process directly in terms of Bayesian filtering.
It then becomes possible to perform approximate Bayesian inference on the latent force as well as the ODE solution in a single, linear complexity pass of an extended Kalman filter.
We demonstrate the expressiveness and performance of the algorithm by training a non-parametric SIRD model on data from the COVID-19 outbreak.
arXiv Detail & Related papers (2021-03-18T10:36:09Z)
- Learning Gaussian Graphical Models via Multiplicative Weights [54.252053139374205]
We adapt an algorithm of Klivans and Meka based on the method of multiplicative weight updates.
The algorithm enjoys a sample complexity bound that is qualitatively similar to others in the literature.
It has a low runtime $O(mp^2)$ in the case of $m$ samples and $p$ nodes, and can trivially be implemented in an online manner.
arXiv Detail & Related papers (2020-02-20T10:50:58Z)
- Data-Driven Discovery of Coarse-Grained Equations [0.0]
Multiscale modeling and simulations are two areas where learning on simulated data can lead to the discovery of coarse-grained equations.
We replace the human discovery of such models with a machine-learning strategy based on sparse regression that can be executed in two modes.
A series of examples demonstrates the accuracy, robustness, and limitations of our approach to equation discovery.
arXiv Detail & Related papers (2020-01-30T23:41:37Z)