It's a super deal -- train recurrent network on noisy data and get smooth prediction free
- URL: http://arxiv.org/abs/2206.04215v2
- Date: Mon, 1 May 2023 15:29:24 GMT
- Title: It's a super deal -- train recurrent network on noisy data and get smooth prediction free
- Authors: Boris Rubinstein
- Abstract summary: We examine the influence of the noise component in both the training data sets and the input sequences on network prediction quality.
We propose and discuss an explanation of the observed noise compression in the predictive process.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent research demonstrates that the prediction of time series by predictive recurrent neural networks based on noisy input generates a smooth anticipated trajectory. We examine the influence of the noise component in both the training data sets and the input sequences on network prediction quality. We propose and discuss an explanation of the observed noise compression in the predictive process. We also discuss the importance of this property of recurrent networks in a neuroscience context for the evolution of living organisms.
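As a rough illustration of the noise-compression effect described above, the toy sketch below trains an echo-state network (a stand-in for the paper's predictive recurrent network; the reservoir size, noise level, and ridge regularization are arbitrary choices, not the authors' settings) on a noisy sine wave and compares the prediction's deviation from the clean signal with the input noise level.

```python
import numpy as np

rng = np.random.default_rng(1)

# Clean signal plus observation noise; the network is trained and driven
# only with the noisy sequence. (Hypothetical toy setup, not the paper's.)
T = 2000
t = np.arange(T)
clean = np.sin(2 * np.pi * t / 50)
noisy = clean + 0.3 * rng.normal(size=T)

# Leaky echo-state reservoir; only the linear readout is trained.
N = 300
Win = rng.uniform(-0.5, 0.5, (N, 1))
W = rng.normal(0, 1, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius 0.9
leak = 0.3

def run(u):
    """Drive the reservoir with input sequence u, collect states."""
    x = np.zeros(N)
    states = np.empty((len(u), N))
    for i, ui in enumerate(u):
        x = (1 - leak) * x + leak * np.tanh(Win[:, 0] * ui + W @ x)
        states[i] = x
    return states

S = run(noisy[:-1])                      # state after seeing noisy[t]
washout = 100
A = S[washout:]
target = noisy[1:][washout:]             # one-step-ahead noisy target
Wout = np.linalg.solve(A.T @ A + 1e-2 * np.eye(N), A.T @ target)  # ridge

pred = S @ Wout                          # one-step-ahead predictions
# Noise compression: the white-noise part of the target is unpredictable,
# so the prediction hugs the clean signal more tightly than the input does.
err_in = np.std(noisy[1:][washout:] - clean[1:][washout:])
err_out = np.std(pred[washout:] - clean[1:][washout:])
print(f"input noise std: {err_in:.3f}   prediction residual std: {err_out:.3f}")
```

Because the noise is white and hence unpredictable, the ridge readout's optimal one-step prediction approximates the clean signal, which is one way to read the paper's "smooth prediction for free" claim.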
Related papers
- Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Conventional wisdom suggests that neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z)
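The tendency toward a constant prediction is easy to reproduce for networks with saturating activations; the sketch below (a hypothetical two-layer tanh regressor and training setup of my own choosing, not the paper's experiments, which make a more general argument) trains on inputs in [-1, 1] and probes far outside that range.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny regression net with saturating (tanh) hidden units, trained in [-1, 1].
X = rng.uniform(-1, 1, size=(256, 1))
y = np.sin(3 * X)

W1 = rng.normal(0, 1, (1, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.1, (32, 1)); b2 = np.zeros(1)

lr = 0.05
for _ in range(2000):          # plain batch gradient descent on MSE
    H = np.tanh(X @ W1 + b1)   # hidden activations
    P = H @ W2 + b2            # predictions
    G = 2 * (P - y) / len(X)   # dLoss/dP
    GH = (G @ W2.T) * (1 - H**2)
    W2 -= lr * H.T @ G; b2 -= lr * G.sum(0)
    W1 -= lr * X.T @ GH; b1 -= lr * GH.sum(0)

# Probe increasingly OOD inputs: the tanh units saturate, so the output
# approaches a constant as |x| grows.
for x in [1.0, 3.0, 10.0, 100.0]:
    p = np.tanh(np.array([[x]]) @ W1 + b1) @ W2 + b2
    print(f"x={x:6.1f}  prediction={p.item():+.4f}")
```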
- Sequential Learning from Noisy Data: Data-Assimilation Meets Echo-State Network [0.0]
A sequential training algorithm is developed for an echo-state network (ESN) by incorporating noisy observations using an ensemble Kalman filter.
The resultant Kalman-trained echo-state network (KalT-ESN) outperforms the traditionally trained ESN with a least-squares algorithm while still being computationally cheap.
arXiv Detail & Related papers (2023-04-01T02:03:08Z)
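A minimal sketch of the sequential idea, assuming we filter only the linear readout weights of an ESN with a scalar Kalman update (the paper uses an ensemble Kalman filter over noisy observations, so this is a simplified stand-in; all dimensions and noise levels here are made up):

```python
import numpy as np

# Kalman-style sequential update of a linear ESN readout: treat the readout
# weights as a static hidden state observed through y_t = x_t . w + noise.

def kalman_readout_step(w, P, x, y, r=0.1):
    """One update of readout weights w with covariance P from observation y."""
    Px = P @ x
    k = Px / (x @ Px + r)       # Kalman gain for a scalar observation
    w = w + k * (y - x @ w)     # correct weights with the innovation
    P = P - np.outer(k, Px)     # shrink weight uncertainty
    return w, P

rng = np.random.default_rng(2)
N = 50
w_true = rng.normal(size=N)
w = np.zeros(N)
P = np.eye(N) * 10.0

for _ in range(500):            # noisy streaming observations
    x = rng.normal(size=N)      # stands in for a reservoir state vector
    y = x @ w_true + 0.1 * rng.normal()
    w, P = kalman_readout_step(w, P, x, y)

print("readout error:", np.linalg.norm(w - w_true))
```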
- Delay Embedded Echo-State Network: A Predictor for Partially Observed Systems [0.0]
A predictor for partial observations is developed using an echo-state network (ESN) and time-delay embedding of the partially observed state.
The proposed method is theoretically justified by Takens' embedding theorem and strong observability of a nonlinear system.
arXiv Detail & Related papers (2022-11-11T04:13:55Z)
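The delay-embedding half of that paper is simple to sketch: stack lagged copies of the observed scalar so that each row carries enough history to stand in for the unobserved state (illustrative only; the embedding dimension and lag below are hypothetical choices, not the paper's).

```python
import numpy as np

# Takens-style delay embedding: turn a scalar observation sequence into
# vectors [y_t, y_{t-tau}, ..., y_{t-(d-1)tau}] that can serve as richer
# inputs to an ESN when the full state is not observed.

def delay_embed(y, dim=3, tau=5):
    """Stack dim delayed copies of y; row t is the embedding at time t."""
    start = (dim - 1) * tau
    return np.stack([y[start - k * tau : len(y) - k * tau]
                     for k in range(dim)], axis=1)

y = np.sin(0.1 * np.arange(200))     # partially observed coordinate
E = delay_embed(y, dim=3, tau=5)
print(E.shape)                       # (190, 3): embedded trajectory
```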
- An advanced spatio-temporal convolutional recurrent neural network for storm surge predictions [73.4962254843935]
We study the capability of artificial neural network models to emulate storm surge based on the storm track/size/intensity history.
This study presents a neural network model that can predict storm surge, informed by a database of synthetic storm simulations.
arXiv Detail & Related papers (2022-04-18T23:42:18Z)
- Deep Impulse Responses: Estimating and Parameterizing Filters with Deep Networks [76.830358429947]
Impulse response estimation in high noise and in-the-wild settings is a challenging problem.
We propose a novel framework for parameterizing and estimating impulse responses based on recent advances in neural representation learning.
arXiv Detail & Related papers (2022-02-07T18:57:23Z)
- Embedding Graph Convolutional Networks in Recurrent Neural Networks for Predictive Monitoring [0.0]
This paper proposes an approach based on graph convolutional networks and recurrent neural networks.
An experimental evaluation on real-life event logs shows that our approach is more consistent and outperforms the current state-of-the-art approaches.
arXiv Detail & Related papers (2021-12-17T17:30:30Z)
- Exploring the Properties and Evolution of Neural Network Eigenspaces during Training [0.0]
We show that problem difficulty and neural network capacity affect the predictive performance in an antagonistic manner.
We show that the observed effects are independent of previously reported pathological patterns such as the "tail pattern".
arXiv Detail & Related papers (2021-06-17T14:18:12Z)
- A Predictive Coding Account for Chaotic Itinerancy [68.8204255655161]
We show how a recurrent neural network implementing predictive coding can generate neural trajectories similar to chaotic itinerancy in the presence of input noise.
We propose two scenarios generating random and past-independent attractor switching trajectories using our model.
arXiv Detail & Related papers (2021-06-16T16:48:14Z)
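For intuition about the mechanism that entry names, here is a minimal predictive-coding inference loop in which a latent state is nudged down the gradient of the prediction error while the observations carry input noise; it is a sketch of the basic update rule only, not the authors' full chaotic-itinerancy model, and all dimensions and rates are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
D_obs, D_lat = 8, 4
W = rng.normal(0, 1, (D_obs, D_lat))     # generative (prediction) weights
z = np.zeros(D_lat)                      # latent state
eta = 0.1

trajectory = []
for t in range(300):
    o = np.sin(0.05 * t * np.arange(1, D_obs + 1))   # structured signal
    o += 0.2 * rng.normal(size=D_obs)                # input noise
    err = o - W @ z                 # prediction error
    z = z + eta * W.T @ err         # gradient step on squared error
    trajectory.append(z.copy())

trajectory = np.array(trajectory)   # latent path shaped by input and noise
print(trajectory.shape)             # (300, 4)
```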
- Adversarial Refinement Network for Human Motion Prediction [61.50462663314644]
Two popular methods, recurrent neural networks and feed-forward deep networks, are able to predict a rough motion trend.
We propose an Adversarial Refinement Network (ARNet) following a simple yet effective coarse-to-fine mechanism with novel adversarial error augmentation.
arXiv Detail & Related papers (2020-11-23T05:42:20Z)
- Neural Networks with Recurrent Generative Feedback [61.90658210112138]
We instantiate this design on convolutional neural networks (CNNs).
In the experiments, CNN-F shows considerably improved adversarial robustness over conventional feedforward CNNs on standard benchmarks.
arXiv Detail & Related papers (2020-07-17T19:32:48Z)
- A fast noise filtering algorithm for time series prediction using recurrent neural networks [0.0]
We examine the internal dynamics of RNNs and establish a set of conditions required for such behavior.
We propose a new approximate algorithm and show that it significantly speeds up the predictive process without loss of accuracy.
arXiv Detail & Related papers (2020-07-16T01:32:48Z)