A Neural Network Ensemble Approach to System Identification
- URL: http://arxiv.org/abs/2110.08382v1
- Date: Fri, 15 Oct 2021 21:45:48 GMT
- Title: A Neural Network Ensemble Approach to System Identification
- Authors: Elisa Negrini, Giovanna Citti, Luca Capogna
- Abstract summary: We present a new algorithm for learning unknown governing equations from trajectory data.
We approximate the function $f$ using an ensemble of neural networks.
- Score: 0.6445605125467573
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a new algorithm for learning unknown governing equations from
trajectory data, using an ensemble of neural networks. Given samples of
solutions $x(t)$ to an unknown dynamical system $\dot{x}(t)=f(t,x(t))$, we
approximate the function $f$ using an ensemble of neural networks. We express
the equation in integral form and use the Euler method to predict the solution at
every successive time step, using a different neural network as a prior for $f$
at each iteration. This procedure yields $M-1$ time-independent networks, where
$M$ is the number of time steps at which $x(t)$ is observed. Finally, we obtain a
single function $f(t,x(t))$ by neural network interpolation. Unlike our earlier
work, where we numerically computed the derivatives of the data and used them as
targets for a Lipschitz regularized neural network approximating $f$, our new
method avoids numerical differentiation, which is unstable in the presence of
noise. We test the new algorithm on multiple examples, both with and without
noise in the data. We empirically show that generalization and recovery of the
governing equation improve when a Lipschitz regularization term is added to our
loss function, and that the new method improves on our previous one especially
in the presence of noise, when numerical differentiation provides low-quality
target data. Finally, we compare our results with the method proposed by
Raissi et al., arXiv:1801.01236 (2018), and with SINDy.
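As a rough illustration of the procedure described in the abstract, the sketch below is an assumption of ours, not the authors' code: the network size, optimizer, and hyperparameters are placeholders. It trains one small network per observation interval so that a single forward-Euler step maps $x(t_k)$ to $x(t_{k+1})$, producing the $M-1$ time-independent networks; the final interpolation into a single $f(t,x)$ is only indicated in a comment.

```python
import torch
import torch.nn as nn

# Illustrative sketch (not the authors' implementation): learn f in
# x'(t) = f(t, x(t)) from samples x_obs[k] = x(t_k), k = 0..M-1, via one
# explicit Euler step per time interval, with one small network per interval.

def make_net(dim):
    return nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))

def train_interval_net(x_k, x_k1, dt, dim, steps=2000, lr=1e-3):
    """Train f_k so that x_k + dt * f_k(x_k) ~ x_{k+1} (one Euler step)."""
    net = make_net(dim)
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        pred = x_k + dt * net(x_k)          # explicit Euler prediction
        loss = ((pred - x_k1) ** 2).mean()  # a Lipschitz penalty could be added here
        loss.backward()
        opt.step()
    return net

def fit_ensemble(x_obs, dt):
    """x_obs: (M, dim) tensor of observed states at uniformly spaced times.
    Returns the M-1 time-independent networks, one per interval."""
    dim = x_obs.shape[1]
    return [train_interval_net(x_obs[k], x_obs[k + 1], dt, dim)
            for k in range(len(x_obs) - 1)]

# Final step (interpolation, only sketched): train a single network g(t, x) to
# match the ensemble outputs f_k(x) at the corresponding times t_k, giving one f(t, x).
```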
Related papers
- Solving multiscale elliptic problems by sparse radial basis function
neural networks [3.5297361401370044]
We propose a sparse radial basis function neural network method to solve elliptic partial differential equations (PDEs) with multiscale coefficients.
Inspired by the deep mixed residual method, we rewrite the second-order problem into a first-order system and employ multiple radial basis function neural networks (RBFNNs) to approximate unknown functions in the system.
The accuracy and effectiveness of the proposed method are demonstrated through a collection of multiscale problems with scale separation, discontinuity and multiple scales from one to three dimensions.
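For orientation only, the sketch below shows the basic RBFNN ingredient under our own assumptions (Gaussian bases and a plain $\ell_1$-penalized least-squares fit by subgradient descent); it is not the paper's multiscale scheme or its first-order reformulation.

```python
import numpy as np

# Minimal sketch of a sparse radial basis function network (RBFNN) fit,
# assuming Gaussian bases and an l1 penalty; not the paper's exact scheme.

def rbf_features(x, centers, width):
    """Gaussian RBF features phi_j(x) = exp(-|x - c_j|^2 / (2 width^2))."""
    d2 = np.sum((x[:, None, :] - centers[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_sparse_rbf(x, y, centers, width, lam=1e-3, steps=5000, lr=1e-2):
    """Fit u(x) = sum_j w_j phi_j(x) to data y by (sub)gradient descent
    on the least-squares loss with an l1 sparsity penalty on w."""
    phi = rbf_features(x, centers, width)          # (n_points, n_centers)
    w = np.zeros(phi.shape[1])
    for _ in range(steps):
        r = phi @ w - y                            # residual
        grad = phi.T @ r / len(y) + lam * np.sign(w)
        w -= lr * grad
    return w

# Usage: approximate u(x) = sin(2*pi*x) on [0, 1] with 50 candidate centers.
x = np.linspace(0, 1, 200)[:, None]
y = np.sin(2 * np.pi * x[:, 0])
centers = np.linspace(0, 1, 50)[:, None]
w = fit_sparse_rbf(x, y, centers, width=0.05)
```

In the paper's setting, several such networks approximate the unknowns of the first-order system and the training objective is a residual of the equations rather than a direct data misfit.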
arXiv Detail & Related papers (2023-09-01T15:11:34Z) - Generalization and Stability of Interpolating Neural Networks with
Minimal Width [37.908159361149835]
We investigate the generalization and optimization of shallow neural networks trained by gradient descent in the interpolating regime.
We prove that the training loss converges with $m=\Omega(\log^4(n))$ neurons.
With $m=\Omega(\log^4(n))$ neurons and $T\approx n$ iterations, we bound the test loss.
arXiv Detail & Related papers (2023-02-18T05:06:15Z) - Training Overparametrized Neural Networks in Sublinear Time [14.918404733024332]
Deep learning comes at a tremendous computational and energy cost.
We present a new view of neural networks as a small subset of binary search trees, where each network corresponds to a subset of search trees.
We believe this view would have further applications in the analysis of deep networks.
arXiv Detail & Related papers (2022-08-09T02:29:42Z) - Bounding the Width of Neural Networks via Coupled Initialization -- A
Worst Case Analysis [121.9821494461427]
We show how to significantly reduce the number of neurons required for two-layer ReLU networks.
We also prove new lower bounds that improve upon prior work, and that under certain assumptions, are best possible.
arXiv Detail & Related papers (2022-06-26T06:51:31Z) - An application of the splitting-up method for the computation of a
neural network representation for the solution for the filtering equations [68.8204255655161]
Filtering equations play a central role in many real-life applications, including numerical weather prediction, finance and engineering.
One of the classical approaches to approximate the solution of the filtering equations is to use a PDE inspired method, called the splitting-up method.
We combine this method with a neural network representation to produce an approximation of the unnormalised conditional distribution of the signal process.
arXiv Detail & Related papers (2022-01-10T11:01:36Z) - Estimating Vector Fields from Noisy Time Series [6.939768185086753]
We describe a neural network architecture consisting of tensor products of one-dimensional neural shape functions.
We find that the neural shape function architecture retains the approximation properties of dense neural networks.
We also study the combination of either our neural shape function method or existing differential equation learning methods with alternating minimization and multiple trajectories.
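A minimal sketch of the tensor-product construction, under our own assumptions about the details (network widths and how the one-dimensional factors are combined), is given below; it is meant only to make the phrase "tensor products of one-dimensional neural shape functions" concrete, not to reproduce the paper's architecture.

```python
import torch
import torch.nn as nn

# Illustrative sketch (details are assumptions, not the paper's exact model):
# approximate a vector field f: R^d -> R^d by sums of tensor products of
# one-dimensional neural shape functions,
#   f_out(x) ~ sum_{r=1..R} prod_{i=1..d} phi_{out, r, i}(x_i).

class Shape1D(nn.Module):
    """Small network mapping one scalar coordinate to R scalar features."""
    def __init__(self, rank):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, rank))

    def forward(self, xi):                  # xi: (batch, 1)
        return self.net(xi)                 # (batch, rank)

class TensorProductField(nn.Module):
    def __init__(self, dim, rank):
        super().__init__()
        # One bank of d one-dimensional shape functions per output component.
        self.banks = nn.ModuleList([
            nn.ModuleList([Shape1D(rank) for _ in range(dim)])
            for _ in range(dim)
        ])

    def forward(self, x):                   # x: (batch, dim)
        outputs = []
        for bank in self.banks:
            prod = None
            for i, phi in enumerate(bank):
                feat = phi(x[:, i:i + 1])   # (batch, rank)
                prod = feat if prod is None else prod * feat
            outputs.append(prod.sum(dim=1)) # sum the R tensor-product terms
        return torch.stack(outputs, dim=1)  # (batch, dim)
```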
arXiv Detail & Related papers (2020-12-06T07:27:56Z) - System Identification Through Lipschitz Regularized Deep Neural Networks [0.4297070083645048]
We use neural networks to learn governing equations from data.
We reconstruct the right-hand side of a system of ODEs $\dot{x}(t) = f(t, x(t))$ directly from observed uniformly time-sampled data.
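To make the regularization concrete: a Lipschitz penalty can be added to the training loss, for instance by estimating the network's Lipschitz constant from pairwise difference quotients on a batch. The estimator and weight below are our assumptions, not necessarily the authors' exact choices.

```python
import torch

# Sketch of a Lipschitz regularization term (one possible estimator, assumed
# here): approximate the Lipschitz constant of the network on a batch by the
# largest pairwise difference quotient and add it to the data-fitting loss.

def lipschitz_estimate(net, x, eps=1e-8):
    """max_{i != j} ||net(x_i) - net(x_j)|| / ||x_i - x_j|| over the batch."""
    y = net(x)                                   # (n, out_dim)
    dy = torch.cdist(y, y)                       # pairwise output distances
    dx = torch.cdist(x, x) + eps                 # pairwise input distances
    mask = ~torch.eye(len(x), dtype=torch.bool)  # ignore the diagonal
    return (dy / dx)[mask].max()

def loss_fn(net, x, target, alpha=1e-2):
    """Mean-squared data misfit plus a Lipschitz penalty weighted by alpha."""
    misfit = ((net(x) - target) ** 2).mean()
    return misfit + alpha * lipschitz_estimate(net, x)
```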
arXiv Detail & Related papers (2020-09-07T17:52:51Z) - Shuffling Recurrent Neural Networks [97.72614340294547]
We propose a novel recurrent neural network model, where the hidden state $h_t$ is obtained by permuting the vector elements of the previous hidden state $h_{t-1}$.
In our model, the prediction is given by a second learned function, which is applied to the hidden state $s(h_t)$.
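A minimal sketch of such a cell, assuming a fixed (non-learned) permutation of the hidden units and an additive learned input map (both assumptions on our part, not necessarily the paper's exact formulation):

```python
import torch
import torch.nn as nn

# Illustrative "shuffling" recurrent cell: the previous hidden state is
# permuted (fixed permutation here, an assumption), combined with a learned
# function of the input, and a second learned function s maps h_t to the output.

class ShufflingRNNCell(nn.Module):
    def __init__(self, input_dim, hidden_dim, output_dim):
        super().__init__()
        self.register_buffer("perm", torch.randperm(hidden_dim))  # fixed shuffle
        self.inp = nn.Linear(input_dim, hidden_dim)                # learned input map
        self.out = nn.Sequential(                                  # second learned function s
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, output_dim))

    def forward(self, x_t, h_prev):
        h_t = torch.tanh(h_prev[..., self.perm] + self.inp(x_t))
        return h_t, self.out(h_t)        # new hidden state, prediction s(h_t)

# Usage: unroll over a sequence x of shape (T, batch, input_dim), starting from
# h = torch.zeros(batch, hidden_dim); for x_t in x: h, y_t = cell(x_t, h)
```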
arXiv Detail & Related papers (2020-07-14T19:36:10Z) - Learning Over-Parametrized Two-Layer ReLU Neural Networks beyond NTK [58.5766737343951]
We consider the dynamics of gradient descent for learning a two-layer neural network.
We show that an over-parametrized two-layer neural network trained by gradient descent can provably learn a ground-truth network beyond the Neural Tangent Kernel regime.
arXiv Detail & Related papers (2020-07-09T07:09:28Z) - Multipole Graph Neural Operator for Parametric Partial Differential
Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z) - Complexity of Finding Stationary Points of Nonsmooth Nonconvex Functions [84.49087114959872]
We provide the first non-asymptotic analysis for finding stationary points of nonsmooth, nonconvex functions.
In particular, we study Hadamard semi-differentiable functions, perhaps the largest class of nonsmooth functions.
arXiv Detail & Related papers (2020-02-10T23:23:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.