Selective Memory Recursive Least Squares: Recast Forgetting into Memory
in RBF Neural Network Based Real-Time Learning
- URL: http://arxiv.org/abs/2211.07909v2
- Date: Tue, 8 Aug 2023 12:30:03 GMT
- Title: Selective Memory Recursive Least Squares: Recast Forgetting into Memory
in RBF Neural Network Based Real-Time Learning
- Authors: Yiming Fei, Jiangang Li, Yanan Li
- Abstract summary: In radial basis function neural network (RBFNN) based real-time learning tasks, forgetting mechanisms are widely used.
This paper proposes a real-time training method named selective memory recursive least squares (SMRLS) in which the classical forgetting mechanisms are recast into a memory mechanism.
With SMRLS, the input space of the RBFNN is evenly divided into a finite number of partitions and a synthesized objective function is developed using synthesized samples from each partition.
- Score: 2.31120983784623
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In radial basis function neural network (RBFNN) based real-time learning
tasks, forgetting mechanisms are widely used so that the neural network can
keep its sensitivity to new data. However, with forgetting mechanisms, some
useful knowledge is lost simply because it was learned a long time ago,
which we refer to as the passive knowledge forgetting phenomenon. To address
this problem, this paper proposes a real-time training method named selective
memory recursive least squares (SMRLS) in which the classical forgetting
mechanisms are recast into a memory mechanism. Unlike the forgetting
mechanism, which weights samples mainly by the time at which they were
collected, the memory mechanism evaluates the importance of samples through
both their temporal and spatial distribution. With
SMRLS, the input space of the RBFNN is evenly divided into a finite number of
partitions and a synthesized objective function is developed using synthesized
samples from each partition. In addition to the current approximation error,
the neural network also updates its weights according to the recorded data from
the partition being visited. Compared with classical training methods including
the forgetting factor recursive least squares (FFRLS) and stochastic gradient
descent (SGD) methods, SMRLS achieves improved learning speed and
generalization capability, as demonstrated by simulation results.
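The abstract only outlines the mechanism, so the following is a minimal NumPy sketch of the idea rather than the paper's implementation: a Gaussian RBFNN whose box-bounded input space is evenly divided into a grid of partitions, one synthesized sample per partition (here a simple running average, an assumed stand-in for the paper's synthesized objective function), and a recursive least squares weight update driven by the synthesized sample of the visited partition. The class name SMRLSSketch, the input-space bounds, the shared kernel width, and the running-average synthesis are illustrative assumptions; setting lam below 1 recovers FFRLS-style exponential forgetting for comparison with the baseline mentioned in the abstract.

```python
# Minimal, assumption-laden sketch of the SMRLS idea; not the paper's implementation.
import numpy as np


class SMRLSSketch:
    def __init__(self, centers, width, n_partitions, lo=-1.0, hi=1.0, lam=1.0):
        self.centers = np.asarray(centers, dtype=float)   # RBF centers, shape (M, d)
        self.width = width                                # shared Gaussian kernel width (assumed)
        self.n_part = n_partitions                        # partitions per input dimension
        self.lo, self.hi = lo, hi                         # assumed box bounds of the input space
        self.w = np.zeros(len(self.centers))              # RBFNN output weights
        self.P = np.eye(len(self.centers)) * 1e3          # RLS covariance matrix
        self.lam = lam                                    # 1.0 = pure memory; < 1.0 = FFRLS-style forgetting
        self.memory = {}                                  # partition index -> synthesized sample

    def phi(self, x):
        """Gaussian RBF regressor vector for input x of shape (d,)."""
        d2 = np.sum((self.centers - x) ** 2, axis=1)
        return np.exp(-d2 / (2.0 * self.width ** 2))

    def partition_index(self, x):
        """Evenly divide the box-bounded input space into a finite grid of partitions."""
        frac = (np.asarray(x) - self.lo) / (self.hi - self.lo)
        idx = np.clip((frac * self.n_part).astype(int), 0, self.n_part - 1)
        return tuple(idx.tolist())

    def update(self, x, y):
        key = self.partition_index(x)
        # Synthesize one representative sample per partition; a running average of the
        # data recorded in that partition is used here as an assumed simplification.
        if key in self.memory:
            xs, ys, n = self.memory[key]
            n += 1
            xs = xs + (np.asarray(x, dtype=float) - xs) / n
            ys = ys + (float(y) - ys) / n
        else:
            xs, ys, n = np.asarray(x, dtype=float).copy(), float(y), 1
        self.memory[key] = (xs, ys, n)
        # RLS update driven by the synthesized sample of the visited partition, so
        # knowledge tied to rarely revisited regions is retained rather than
        # exponentially forgotten.
        p = self.phi(xs)
        k = self.P @ p / (self.lam + p @ self.P @ p)
        self.w = self.w + k * (ys - p @ self.w)
        self.P = (self.P - np.outer(k, p @ self.P)) / self.lam

    def predict(self, x):
        return self.phi(x) @ self.w


# Example: approximate f(x) = sin(pi * x) on [-1, 1] from streaming samples.
rng = np.random.default_rng(0)
model = SMRLSSketch(centers=np.linspace(-1, 1, 11).reshape(-1, 1),
                    width=0.2, n_partitions=10)
for _ in range(2000):
    x = rng.uniform(-1.0, 1.0, size=1)
    model.update(x, np.sin(np.pi * x[0]))
print(model.predict(np.array([0.5])))  # approximately sin(pi/2) = 1
```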
Related papers
- Heterogenous Memory Augmented Neural Networks [84.29338268789684]
We introduce a novel heterogeneous memory augmentation approach for neural networks.
By introducing learnable memory tokens with an attention mechanism, we can effectively boost performance without large computational overhead.
We evaluate our approach on various image and graph-based tasks under both in-distribution (ID) and out-of-distribution (OOD) conditions.
arXiv Detail & Related papers (2023-10-17T01:05:28Z)
- Iterative self-transfer learning: A general methodology for response time-history prediction based on small dataset [0.0]
An iterative self-transfer learning method for training neural networks based on small datasets is proposed in this study.
The results show that the proposed method can improve model performance by nearly an order of magnitude on small datasets.
arXiv Detail & Related papers (2023-06-14T18:48:04Z)
- Learning Signal Temporal Logic through Neural Network for Interpretable Classification [13.829082181692872]
We propose an explainable neural-symbolic framework for the classification of time-series behaviors.
We demonstrate the computational efficiency, compactness, and interpretability of the proposed method through driving scenarios and naval surveillance case studies.
arXiv Detail & Related papers (2022-10-04T21:11:54Z)
- Reducing Catastrophic Forgetting in Self Organizing Maps with Internally-Induced Generative Replay [67.50637511633212]
A lifelong learning agent is able to continually learn from potentially infinite streams of pattern sensory data.
One major historic difficulty in building agents that adapt is that neural systems struggle to retain previously-acquired knowledge when learning from new samples.
This problem is known as catastrophic forgetting (interference) and remains an unsolved problem in the domain of machine learning to this day.
arXiv Detail & Related papers (2021-12-09T07:11:14Z)
- Path classification by stochastic linear recurrent neural networks [2.5499055723658097]
We show that RNNs retain a partial signature of the paths they are fed as the unique information exploited for training and classification tasks.
We argue that these RNNs are easy to train and robust, and we back these observations with numerical experiments on both synthetic and real data.
arXiv Detail & Related papers (2021-08-06T12:59:12Z)
- PredRNN: A Recurrent Neural Network for Spatiotemporal Predictive Learning [109.84770951839289]
We present PredRNN, a new recurrent network for learning visual dynamics from historical context.
We show that our approach obtains highly competitive results on three standard datasets.
arXiv Detail & Related papers (2021-03-17T08:28:30Z)
- HiPPO: Recurrent Memory with Optimal Polynomial Projections [93.3537706398653]
We introduce a general framework (HiPPO) for the online compression of continuous signals and discrete time series by projection onto polynomial bases.
Given a measure that specifies the importance of each time step in the past, HiPPO produces an optimal solution to a natural online function approximation problem.
This formal framework yields a new memory update mechanism (HiPPO-LegS) that scales through time to remember all history, avoiding priors on the timescale.
arXiv Detail & Related papers (2020-08-17T23:39:33Z)
- Incremental Training of a Recurrent Neural Network Exploiting a Multi-Scale Dynamic Memory [79.42778415729475]
We propose a novel incrementally trained recurrent architecture targeting explicitly multi-scale learning.
We show how to extend the architecture of a simple RNN by separating its hidden state into different modules.
We discuss a training algorithm where new modules are iteratively added to the model to learn progressively longer dependencies.
arXiv Detail & Related papers (2020-06-29T08:35:49Z)
- Automatic Recall Machines: Internal Replay, Continual Learning and the Brain [104.38824285741248]
Replay in neural networks involves training on sequential data with memorized samples, which counteracts forgetting of previous behavior caused by non-stationarity.
We present a method where these auxiliary samples are generated on the fly, given only the model that is being trained for the assessed objective.
Instead, the implicit memory of learned samples within the assessed model itself is exploited.
arXiv Detail & Related papers (2020-06-22T15:07:06Z)
- Automated Deep Abstractions for Stochastic Chemical Reaction Networks [0.0]
Low-level chemical reaction network (CRN) models give rise to a high-dimensional continuous-time Markov chain (CTMC).
A recently proposed abstraction method uses deep learning to replace this CTMC with a discrete-time continuous-space process.
In this paper, we propose to further automate deep abstractions for CRNs by learning the optimal neural network architecture.
arXiv Detail & Related papers (2020-01-30T13:49:58Z)