Sequential Learning from Noisy Data: Data-Assimilation Meets Echo-State Network
- URL: http://arxiv.org/abs/2304.00198v1
- Date: Sat, 1 Apr 2023 02:03:08 GMT
- Title: Sequential Learning from Noisy Data: Data-Assimilation Meets Echo-State Network
- Authors: Debdipta Goswami
- Abstract summary: A sequential training algorithm is developed for an echo-state network (ESN) by incorporating noisy observations using an ensemble Kalman filter.
The resulting Kalman-trained echo-state network (KalT-ESN) outperforms an ESN trained with the traditional least-squares algorithm while remaining computationally cheap.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper explores the problem of training a recurrent neural
network from noisy data. While neural-network-based dynamic predictors perform
well with noise-free training data, prediction with noisy inputs during the
training phase poses a significant challenge. Here, a sequential training
algorithm is developed for an echo-state network (ESN) by incorporating noisy
observations using an ensemble Kalman filter. The resulting Kalman-trained
echo-state network (KalT-ESN) outperforms an ESN trained with the traditional
least-squares algorithm while remaining computationally cheap. The proposed
method is demonstrated on noisy observations from three systems: two synthetic
datasets from chaotic dynamical systems and a set of real-time traffic data.
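The abstract names the ingredients (a fixed ESN reservoir plus an ensemble Kalman filter driven by noisy observations) but not the implementation details. Below is a minimal sketch of one way to realize that combination, treating the flattened readout weights as the EnKF state with a perturbed-observation analysis step; all names, sizes, and noise levels are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Echo-state network with a fixed random reservoir ---
n_res = 200
A = rng.normal(size=(n_res, n_res))
A *= 0.9 / np.max(np.abs(np.linalg.eigvals(A)))   # spectral radius < 1 (echo-state property)
B = rng.uniform(-0.5, 0.5, size=(n_res, 1))

def reservoir_step(r, u):
    """Drive the reservoir with scalar input u."""
    return np.tanh(A @ r + B[:, 0] * u)

# --- EnKF over the readout weights w, with y = w . r as the observation map ---
n_ens, sigma = 50, 0.1                             # assumed ensemble size / noise std
W = rng.normal(scale=0.1, size=(n_ens, n_res))     # ensemble of readout rows

def enkf_update(W, r, y_obs):
    """One analysis step: nudge each ensemble member toward the observation."""
    y_pred = W @ r                                 # (n_ens,) predicted outputs
    Wp = W - W.mean(axis=0)                        # weight anomalies
    yp = y_pred - y_pred.mean()                    # output anomalies
    gain = (Wp.T @ yp) / (yp @ yp + (n_ens - 1) * sigma**2)   # Kalman gain, shape (n_res,)
    innov = y_obs + sigma * rng.normal(size=n_ens) - y_pred   # perturbed-obs innovations
    return W + np.outer(innov, gain)

# --- Sequential training on a noisy scalar series (stand-in data) ---
u = np.sin(0.1 * np.arange(2000))
y_noisy = u[1:] + sigma * rng.normal(size=1999)    # noisy one-step-ahead targets

r = np.zeros(n_res)
for t in range(1999):
    r = reservoir_step(r, u[t])
    W = enkf_update(W, r, y_noisy[t])

w_out = W.mean(axis=0)                             # point estimate of the readout
```

After training, one-step predictions come from `w_out @ r`; the ensemble mean plays the role that the least-squares solution plays in a traditionally trained ESN.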
Related papers
- Assessing Neural Network Representations During Training Using
Noise-Resilient Diffusion Spectral Entropy [55.014926694758195]
Entropy and mutual information in neural networks provide rich information on the learning process.
We leverage data geometry to access the underlying manifold and reliably compute these information-theoretic measures.
We show that they form noise-resistant measures of intrinsic dimensionality and relationship strength in high-dimensional simulated data.
arXiv Detail & Related papers (2023-12-04T01:32:42Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Training neural networks with structured noise improves classification and generalization [0.0]
We show how adding structure to noisy training data can substantially improve algorithm performance.
We also prove that the so-called Hebbian Unlearning rule coincides with the training-with-noise algorithm when noise is maximal.
arXiv Detail & Related papers (2023-02-26T22:10:23Z)
- Online Real-time Learning of Dynamical Systems from Noisy Streaming Data: A Koopman Operator Approach [0.0]
We present a novel algorithm for online real-time learning of dynamical systems from noisy time-series data.
The proposed algorithm employs the Robust Koopman operator framework to mitigate the effect of measurement noise; a generic online-learning sketch in the same spirit appears after this list.
arXiv Detail & Related papers (2022-12-10T10:21:45Z)
- Delay Embedded Echo-State Network: A Predictor for Partially Observed Systems [0.0]
A predictor for partial observations is developed using an echo-state network (ESN) and time delay embedding of the partially observed state.
The proposed method is theoretically justified via Takens' embedding theorem and the strong observability of a nonlinear system; a minimal delay-embedding sketch appears after this list.
arXiv Detail & Related papers (2022-11-11T04:13:55Z)
- Noise Injection as a Probe of Deep Learning Dynamics [0.0]
We propose a new method to probe the learning mechanism of deep neural networks (DNNs) by perturbing the system using Noise Injection Nodes (NINs).
We find that the system displays distinct phases during training, dictated by the scale of injected noise.
In some cases, the evolution of the noise nodes is similar to that of the unperturbed loss, thus indicating the possibility of using NINs to learn more about the full system in the future.
arXiv Detail & Related papers (2022-10-24T20:51:59Z)
- NAF: Neural Attenuation Fields for Sparse-View CBCT Reconstruction [79.13750275141139]
This paper proposes a novel and fast self-supervised solution for sparse-view CBCT reconstruction.
The desired attenuation coefficients are represented as a continuous function of 3D spatial coordinates, parameterized by a fully-connected deep neural network.
A learning-based encoder entailing hash coding is adopted to help the network capture high-frequency details.
arXiv Detail & Related papers (2022-09-29T04:06:00Z)
- An advanced spatio-temporal convolutional recurrent neural network for storm surge predictions [73.4962254843935]
We study the capability of artificial neural network models to emulate storm surge based on the storm track/size/intensity history.
This study presents a neural network model that can predict storm surge, informed by a database of synthetic storm simulations.
arXiv Detail & Related papers (2022-04-18T23:42:18Z)
- Inferring, Predicting, and Denoising Causal Wave Dynamics [3.9407250051441403]
The DISTributed Artificial neural Network Architecture (DISTANA) is a generative, recurrent graph convolution neural network.
We show that DISTANA is very well suited to denoising data streams, provided that recurring patterns are observed.
It produces stable and accurate closed-loop predictions even over hundreds of time steps.
arXiv Detail & Related papers (2020-09-19T08:33:53Z)
- Multi-Tones' Phase Coding (MTPC) of Interaural Time Difference by Spiking Neural Network [68.43026108936029]
We propose a pure spiking neural network (SNN) based computational model for precise sound localization in noisy real-world environments.
We implement this algorithm in a real-time robotic system with a microphone array.
The experimental results show a mean azimuth error of 13 degrees, surpassing the accuracy of other biologically plausible neuromorphic approaches to sound source localization.
arXiv Detail & Related papers (2020-07-07T08:22:56Z)
- Applications of Koopman Mode Analysis to Neural Networks [52.77024349608834]
We consider the training process of a neural network as a dynamical system acting on the high-dimensional weight space.
We show how the Koopman spectrum can be used to determine the number of layers required for the architecture.
We also show how using Koopman modes we can selectively prune the network to speed up the training procedure.
arXiv Detail & Related papers (2020-06-21T11:00:04Z)
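For the online Koopman entry above: the paper's robust formulation is not reproduced here. The following is only a generic online-EDMD baseline in the same spirit, updating a Koopman matrix over an assumed observable dictionary by recursive least squares; the names `dictionary` and `rls_update` and the dictionary choice are hypothetical.

```python
import numpy as np

def dictionary(x):
    """Assumed observables for a 2-d state: identity, squares, constant."""
    return np.concatenate([x, x**2, [1.0]])

n_dict = 5                        # 2 + 2 + 1 observables
K = np.zeros((n_dict, n_dict))    # running Koopman-matrix estimate
P = 1e3 * np.eye(n_dict)          # RLS inverse-covariance matrix

def rls_update(K, P, x_prev, x_next):
    """One streaming correction of K from the snapshot pair (x_prev, x_next)."""
    phi, phi_next = dictionary(x_prev), dictionary(x_next)
    g = P @ phi / (1.0 + phi @ P @ phi)       # RLS gain
    K = K + np.outer(phi_next - K @ phi, g)   # rank-one update of K
    P = P - np.outer(g, phi @ P)              # covariance downdate
    return K, P

# Stream noisy measurements of a slowly rotating linear system.
rng = np.random.default_rng(1)
A = np.array([[0.95, 0.05], [-0.05, 0.95]])
x = np.array([1.0, 0.0])
for _ in range(500):
    x_next = A @ x + 0.01 * rng.normal(size=2)
    K, P = rls_update(K, P, x, x_next)
    x = x_next
```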
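The delay-embedded ESN entry rests on lifting a partial observation with time-delay coordinates (Takens' theorem) before it reaches the reservoir. A minimal sketch of that lifting step, with an assumed embedding dimension and lag:

```python
import numpy as np

def delay_embed(s, dim=5, lag=2):
    """Row t collects s[t], s[t+lag], ..., s[t+(dim-1)*lag]."""
    n = len(s) - (dim - 1) * lag
    return np.stack([s[i * lag : i * lag + n] for i in range(dim)], axis=1)

# Example: a scalar observation stream becomes 5-d ESN input vectors.
s = np.sin(0.07 * np.arange(1000)) * np.cos(0.13 * np.arange(1000))
U = delay_embed(s)     # shape (992, 5); each row feeds one reservoir step
```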