Estimating Vector Fields from Noisy Time Series
- URL: http://arxiv.org/abs/2012.03199v1
- Date: Sun, 6 Dec 2020 07:27:56 GMT
- Title: Estimating Vector Fields from Noisy Time Series
- Authors: Harish S. Bhat, Majerle Reeves, Ramin Raziperchikolaei
- Abstract summary: We describe a neural network architecture consisting of tensor products of one-dimensional neural shape functions.
We find that the neural shape function architecture retains the approximation properties of dense neural networks.
We also study the combination of either our neural shape function method or existing differential equation learning methods with alternating minimization and multiple trajectories.
- Score: 6.939768185086753
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While there has been a surge of recent interest in learning differential
equation models from time series, methods in this area typically cannot cope
with highly noisy data. We break this problem into two parts: (i) approximating
the unknown vector field (or right-hand side) of the differential equation, and
(ii) dealing with noise. To deal with (i), we describe a neural network
architecture consisting of tensor products of one-dimensional neural shape
functions. For (ii), we propose an alternating minimization scheme that
switches between vector field training and filtering steps, together with
multiple trajectories of training data. We find that the neural shape function
architecture retains the approximation properties of dense neural networks,
enables effective computation of vector field error, and allows for graphical
interpretability, all for data/systems in any finite dimension $d$. We also
study the combination of either our neural shape function method or existing
differential equation learning methods with alternating minimization and
multiple trajectories. We find that retrofitting any learning method in this
way boosts the method's robustness to noise. While in their raw form the
methods struggle with 1% Gaussian noise, after retrofitting, they learn
accurate vector fields from data with 10% Gaussian noise.
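To make ingredient (i) concrete, the following is a minimal PyTorch sketch of a vector field built from tensor products of one-dimensional neural shape functions. It is an illustration under stated assumptions, not the authors' implementation: each shape function is taken to be a small one-hidden-layer tanh network, and each component of the field is a sum of `rank` d-fold products of such functions; the paper's exact parameterization, widths, and training details may differ.

```python
import torch
import torch.nn as nn

class ShapeFunction(nn.Module):
    """A small 1-D network phi: R -> R (assumed form: one tanh hidden layer)."""
    def __init__(self, hidden=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, x):  # x: (batch, 1)
        return self.net(x)

class TensorProductField(nn.Module):
    """Vector field f: R^d -> R^d with components f_k(x) = sum_j prod_i phi_{kji}(x_i)."""
    def __init__(self, d, rank=4, hidden=16):
        super().__init__()
        self.d, self.rank = d, rank
        self.phis = nn.ModuleList([
            nn.ModuleList([
                nn.ModuleList([ShapeFunction(hidden) for _ in range(d)])
                for _ in range(rank)])
            for _ in range(d)])

    def forward(self, x):  # x: (batch, d)
        comps = []
        for k in range(self.d):
            comp = torch.zeros_like(x[:, :1])
            for j in range(self.rank):
                prod = torch.ones_like(x[:, :1])
                for i in range(self.d):
                    prod = prod * self.phis[k][j][i](x[:, i:i + 1])
                comp = comp + prod
            comps.append(comp)
        return torch.cat(comps, dim=1)  # (batch, d)
```

Because each factor depends on a single coordinate, the learned 1-D shape functions can be plotted directly, which is the source of the graphical interpretability claimed above.

Ingredient (ii) can be sketched in the same spirit. The loop below alternates between (a) a vector-field step that fits f to finite-difference derivative estimates of the current filtered states and (b) a filtering step that pulls the states toward the data while keeping them consistent with the learned field. The finite-difference residual, the quadratic penalties, the weight `lam`, and the optimizer settings are illustrative assumptions; with multiple training trajectories, the same losses would simply be summed over trajectories.

```python
def finite_diff(z, dt):
    """Forward differences as a crude derivative estimate: (T-1, d)."""
    return (z[1:] - z[:-1]) / dt

def alternating_minimization(y, dt, steps=50, inner=100, lam=10.0):
    """Hypothetical retrofit loop (assumed losses/schedule), for y: (T, d) noisy states."""
    f = TensorProductField(d=y.shape[1])
    z = y.clone().requires_grad_(True)  # filtered states, initialized at the data
    opt_f = torch.optim.Adam(f.parameters(), lr=1e-3)
    opt_z = torch.optim.Adam([z], lr=1e-2)
    for _ in range(steps):
        # (a) vector-field step: train f on the current filtered states (z held fixed)
        for _ in range(inner):
            opt_f.zero_grad()
            res = finite_diff(z.detach(), dt) - f(z.detach()[:-1])
            (res ** 2).mean().backward()
            opt_f.step()
        # (b) filtering step: update z only; any gradients accumulated on f's
        # parameters here are cleared by opt_f.zero_grad() in the next (a) step
        for _ in range(inner):
            opt_z.zero_grad()
            res = finite_diff(z, dt) - f(z[:-1])
            loss = ((z - y) ** 2).mean() + lam * (res ** 2).mean()
            loss.backward()
            opt_z.step()
    return f, z.detach()
```

Separating the two steps means the vector field is always trained against smoothed state estimates rather than the raw noisy observations, which is consistent with the robustness gains reported above.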
Related papers
- Linearization Turns Neural Operators into Function-Valued Gaussian Processes [23.85470417458593]
We introduce a new framework for approximate Bayesian uncertainty quantification in neural operators.
Our approach can be interpreted as a probabilistic analogue of the concept of currying from functional programming.
We showcase the efficacy of our approach through applications to different types of partial differential equations.
arXiv Detail & Related papers (2024-06-07T16:43:54Z)
- Embedding stochastic differential equations into neural networks via dual processes [0.0]
We propose a new approach to constructing a neural network for predicting expected values of the solutions of stochastic differential equations.
The proposed method does not need data sets of inputs and outputs.
As a demonstration, we construct neural networks for the Ornstein-Uhlenbeck process and the noisy van der Pol system.
arXiv Detail & Related papers (2023-06-08T00:50:16Z)
- Score-based Diffusion Models in Function Space [140.792362459734]
Diffusion models have recently emerged as a powerful framework for generative modeling.
We introduce a mathematically rigorous framework called Denoising Diffusion Operators (DDOs) for training diffusion models in function space.
We show that the corresponding discretized algorithm generates accurate samples at a fixed cost independent of the data resolution.
arXiv Detail & Related papers (2023-02-14T23:50:53Z)
- A Recursively Recurrent Neural Network (R2N2) Architecture for Learning Iterative Algorithms [64.3064050603721]
We generalize the Runge-Kutta neural network to a recursively recurrent neural network (R2N2) superstructure for the design of customized iterative algorithms.
We demonstrate that regular training of the weight parameters inside the proposed superstructure on input/output data of various computational problem classes yields iterations similar to those of Krylov solvers for linear equation systems, Newton-Krylov solvers for nonlinear equation systems, and Runge-Kutta solvers for ordinary differential equations.
arXiv Detail & Related papers (2022-11-22T16:30:33Z)
- Efficient Dataset Distillation Using Random Feature Approximation [109.07737733329019]
We propose a novel algorithm that uses a random feature approximation (RFA) of the Neural Network Gaussian Process (NNGP) kernel.
Our algorithm provides at least a 100-fold speedup over KIP and can run on a single GPU.
Our new method, termed an RFA Distillation (RFAD), performs competitively with KIP and other dataset condensation algorithms in accuracy over a range of large-scale datasets.
arXiv Detail & Related papers (2022-10-21T15:56:13Z)
- Variable Bitrate Neural Fields [75.24672452527795]
We present a dictionary method for compressing feature grids, reducing their memory consumption by up to 100x.
We formulate the dictionary optimization as a vector-quantized auto-decoder problem which lets us learn end-to-end discrete neural representations in a space where no direct supervision is available.
arXiv Detail & Related papers (2022-06-15T17:58:34Z)
- Neural ODEs with Irregular and Noisy Data [8.349349605334316]
We discuss a methodology to learn differential equation(s) from noisy and irregularly sampled measurements.
In our methodology, the main innovation is the integration of deep neural networks with the neural ordinary differential equations (ODEs) approach.
The proposed framework to learn a model describing the vector field is highly effective under noisy measurements.
arXiv Detail & Related papers (2022-05-19T11:24:41Z)
- NeuralEF: Deconstructing Kernels by Deep Neural Networks [47.54733625351363]
Traditional nonparametric solutions based on the Nyström formula suffer from scalability issues.
Recent work has resorted to a parametric approach, i.e., training neural networks to approximate the eigenfunctions.
We show that these problems can be fixed by using a new series of objective functions that extends to both supervised and unsupervised learning problems.
arXiv Detail & Related papers (2022-04-30T05:31:07Z)
- A Neural Network Ensemble Approach to System Identification [0.6445605125467573]
We present a new algorithm for learning unknown governing equations from trajectory data.
We approximate the governing function $f$ using an ensemble of neural networks.
arXiv Detail & Related papers (2021-10-15T21:45:48Z)
- Learning Dynamics from Noisy Measurements using Deep Learning with a Runge-Kutta Constraint [9.36739413306697]
We discuss a methodology to learn differential equation(s) from noisy and sparsely sampled measurements.
In our methodology, the main innovation is the integration of deep neural networks with a classical numerical integration method.
arXiv Detail & Related papers (2021-09-23T15:43:45Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data in the desired structure.
We propose a novel multi-level graph neural network framework that captures interactions at all ranges with only linear complexity.
Experiments confirm that our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
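As a toy illustration of the multi-level idea in the last entry, the sketch below handles short-range interactions on the fine graph and long-range interactions on successively coarsened graphs, keeping the total work roughly linear in the number of nodes. This is a minimal NumPy sketch, not the MGNO architecture: the pairwise pooling rule, the single matrix-product "message" per level, and the requirement that the node count be divisible by 2**levels are all simplifying assumptions.

```python
import numpy as np

def coarsen(A, X):
    """Pool node pairs (0,1), (2,3), ... into one coarse node each (toy rule)."""
    n = A.shape[0] // 2
    P = np.zeros((A.shape[0], n))
    for c in range(n):
        P[2 * c, c] = P[2 * c + 1, c] = 0.5
    return P.T @ A @ P, P.T @ X, P

def multilevel_pass(A, X, levels=2):
    """Sum neighbor messages computed at several graph resolutions.

    A: (n, n) adjacency, X: (n, d) node features; n must be divisible by 2**levels.
    """
    out = A @ X                        # level-0 (short-range) messages
    P_total = np.eye(A.shape[0])
    for _ in range(levels):
        A, X, P = coarsen(A, X)        # coarser graph captures longer-range structure
        P_total = P_total @ P
        out = out + P_total @ (A @ X)  # lift coarse messages back to the fine nodes
    return out
```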
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.