Model discovery in the sparse sampling regime
- URL: http://arxiv.org/abs/2105.00400v1
- Date: Sun, 2 May 2021 06:27:05 GMT
- Title: Model discovery in the sparse sampling regime
- Authors: Gert-Jan Both, Georges Tod, Remy Kusters
- Abstract summary: We show how deep learning can improve model discovery of partial differential equations.
As a result, deep learning-based model discovery allows recovery of the underlying equations.
We illustrate our claims on both synthetic and experimental data sets.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To improve the physical understanding and the predictions of complex dynamic
systems, such as ocean dynamics and weather predictions, it is of paramount
interest to identify interpretable models from coarsely and off-grid sampled
observations. In this work, we investigate how deep learning can improve model
discovery of partial differential equations when the spacing between sensors is
large and the samples are not placed on a grid. We show how leveraging
physics-informed neural network interpolation and automatic differentiation
allows a better fit of the data and its spatiotemporal derivatives than
classical spline interpolation and numerical differentiation techniques. As a
result, deep learning-based model discovery allows recovery of the underlying
equations, even when sensors are placed further apart than the data's
characteristic length scale and in the presence of high noise levels. We
illustrate our claims on both synthetic and experimental data sets, where
combinations of physical processes such as (non)linear advection, reaction,
and diffusion are correctly identified.
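The abstract's central claim, that automatic differentiation of a fitted network yields exact derivatives where numerical differentiation introduces truncation error, can be illustrated with a minimal forward-mode sketch. This is a pure-Python illustration, not code from the paper: the `Dual` class and helper names are invented here, and a real physics-informed neural network would use a deep learning framework's reverse-mode autodiff instead.

```python
import math

class Dual:
    """Minimal forward-mode autodiff value: carries f(x) and f'(x) together."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __mul__(self, other):
        if not isinstance(other, Dual):
            other = Dual(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)

    __rmul__ = __mul__

def sin(x):
    # Chain rule: (sin u)' = cos(u) * u'
    if isinstance(x, Dual):
        return Dual(math.sin(x.val), math.cos(x.val) * x.dot)
    return math.sin(x)

def d_dx(f, x):
    """Derivative of f at x via dual numbers: no step size, no truncation error."""
    return f(Dual(x, 1.0)).dot

f = lambda x: sin(3 * x)                # f(x) = sin(3x), so f'(x) = 3 cos(3x)
x0 = 0.7
exact = 3 * math.cos(3 * x0)
autodiff = d_dx(f, x0)                  # exact to machine precision
h = 1e-5
numeric = (f(x0 + h) - f(x0 - h)) / (2 * h)   # central difference: O(h^2) error
```

The same mechanism, applied to a network interpolating off-grid sensor data, is what lets the derivatives entering the model-discovery step stay smooth even when the raw samples are sparse and noisy.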
Related papers
- Modeling Randomly Observed Spatiotemporal Dynamical Systems [7.381752536547389]
Currently available neural network-based modeling approaches fall short when faced with data collected randomly over time and space.
In response, we developed a new method that effectively handles such randomly sampled data.
Our model integrates techniques from amortized variational inference, neural differential equations, neural point processes, and implicit neural representations to predict both the dynamics of the system and the timings and locations of future observations.
arXiv Detail & Related papers (2024-06-01T09:03:32Z)
- Towards Theoretical Understandings of Self-Consuming Generative Models [56.84592466204185]
This paper tackles the emerging challenge of training generative models within a self-consuming loop.
We construct a theoretical framework to rigorously evaluate how this training procedure impacts the data distributions learned by future models.
We present results for kernel density estimation, delivering nuanced insights such as the impact of mixed data training on error propagation.
arXiv Detail & Related papers (2024-02-19T02:08:09Z)
- Gradient-Based Feature Learning under Structured Data [57.76552698981579]
In the anisotropic setting, the commonly used spherical gradient dynamics may fail to recover the true direction.
We show that appropriate weight normalization that is reminiscent of batch normalization can alleviate this issue.
In particular, under the spiked model with a suitably large spike, the sample complexity of gradient-based training can be made independent of the information exponent.
arXiv Detail & Related papers (2023-09-07T16:55:50Z)
- A Geometric Perspective on Diffusion Models [57.27857591493788]
We inspect the ODE-based sampling of a popular variance-exploding SDE.
We establish a theoretical relationship between the optimal ODE-based sampling and the classic mean-shift (mode-seeking) algorithm.
arXiv Detail & Related papers (2023-05-31T15:33:16Z)
- Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z)
- Generating artificial digital image correlation data using physics-guided adversarial networks [2.07180164747172]
Digital image correlation (DIC) has become a valuable tool to monitor and evaluate mechanical experiments on cracked specimens.
We present a method to directly generate large amounts of artificial displacement data of cracked specimens, resembling real interpolated DIC displacements.
arXiv Detail & Related papers (2023-03-28T12:52:40Z)
- Dynamic Latent Separation for Deep Learning [67.62190501599176]
A core problem in machine learning is to learn expressive latent variables for model prediction on complex data.
Here, we develop an approach that improves expressiveness, provides partial interpretation, and is not restricted to specific applications.
arXiv Detail & Related papers (2022-10-07T17:56:53Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Discovery of Nonlinear Dynamical Systems using a Runge-Kutta Inspired Dictionary-based Sparse Regression Approach [9.36739413306697]
We blend machine learning and dictionary-based learning with numerical analysis tools to discover governing differential equations.
We obtain interpretable and parsimonious models that tend to generalize better beyond the sampling regime.
We discuss its extension to governing equations containing rational nonlinearities that typically appear in biological networks.
arXiv Detail & Related papers (2021-05-11T08:46:51Z)
- Data-Driven Discovery of Coarse-Grained Equations [0.0]
Multiscale modeling and simulations are two areas where learning on simulated data can lead to such discovery.
We replace the human discovery of such models with a machine-learning strategy based on sparse regression that can be executed in two modes.
A series of examples demonstrates the accuracy, robustness, and limitations of our approach to equation discovery.
arXiv Detail & Related papers (2020-01-30T23:41:37Z)
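Several of the papers above, notably the Runge-Kutta-inspired dictionary approach and the coarse-grained equation discovery work, build on sparse regression over a library of candidate terms. A minimal sequentially thresholded least-squares (STLSQ) sketch of that idea, assuming NumPy is available and using noise-free synthetic logistic data (all names and the threshold value are illustrative, not taken from any of the listed papers):

```python
import numpy as np

# Synthetic data from the logistic ODE u' = u - u^2, whose terms we want to discover.
t = np.linspace(0.0, 6.0, 100)
u = 1.0 / (1.0 + 9.0 * np.exp(-t))
du = u * (1.0 - u)   # exact derivative here; in practice it is estimated from
                     # noisy samples, which is where interpolation quality
                     # (splines vs. physics-informed networks) matters most

# Library of candidate terms: [1, u, u^2, u^3]
Theta = np.column_stack([np.ones_like(u), u, u**2, u**3])

# Sequentially thresholded least squares: fit, prune small coefficients, refit.
xi = np.linalg.lstsq(Theta, du, rcond=None)[0]
for _ in range(5):
    small = np.abs(xi) < 0.1          # illustrative sparsity threshold
    xi[small] = 0.0
    keep = ~small
    xi[keep] = np.linalg.lstsq(Theta[:, keep], du, rcond=None)[0]

terms = ["1", "u", "u^2", "u^3"]
# Prints the surviving terms, u and u^2, with coefficients close to 1 and -1.
print({s: float(c) for s, c in zip(terms, xi) if c != 0.0})
```

On clean data the correct sparse model is recovered exactly; with noisy derivatives the thresholding step is what keeps spurious library terms out of the discovered equation.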
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.