Real-Time Progressive Learning: Accumulate Knowledge from Control with
Neural-Network-Based Selective Memory
- URL: http://arxiv.org/abs/2308.04223v2
- Date: Fri, 24 Nov 2023 05:43:36 GMT
- Title: Real-Time Progressive Learning: Accumulate Knowledge from Control with
Neural-Network-Based Selective Memory
- Authors: Yiming Fei, Jiangang Li, Yanan Li
- Abstract summary: A radial basis function neural network based learning control scheme named real-time progressive learning (RTPL) is proposed.
RTPL learns unknown dynamics of the system with guaranteed stability and closed-loop performance.
- Score: 2.8638167607890836
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Memory, as the basis of learning, determines the storage, update and
forgetting of knowledge and further determines the efficiency of learning.
Built around a memory mechanism, a radial basis function neural network
based learning control scheme named real-time progressive learning (RTPL) is
proposed to learn the unknown dynamics of the system with guaranteed stability
and closed-loop performance. Instead of the Lyapunov-based weight update law of
conventional neural network learning control (NNLC), which mainly concentrates
on stability and control performance, RTPL employs the selective memory
recursive least squares (SMRLS) algorithm to update the weights of the neural
network and achieves the following merits: 1) improved learning speed without
filtering, 2) robustness to hyperparameter setting of neural networks, 3) good
generalization ability, i.e., reuse of learned knowledge in different tasks,
and 4) guaranteed learning performance under parameter perturbation. Moreover,
RTPL realizes continuous accumulation of knowledge as a result of its
reasonably allocated memory while NNLC may gradually forget knowledge that it
has learned. Corresponding theoretical analysis and simulation studies
demonstrate the effectiveness of RTPL.
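
To make the weight-update idea concrete, below is a minimal sketch of a plain recursive least squares (RLS) update for the output weights of a Gaussian RBF network, the family of updates that SMRLS belongs to. This is an illustrative assumption, not the paper's SMRLS algorithm: the selective allocation of memory over the input space is omitted, and the class name, forgetting factor, widths, and example data are all made up for the sketch.

```python
# Minimal sketch (assumption, not the paper's code): plain recursive least
# squares (RLS) for the output weights of a Gaussian RBF network. SMRLS in the
# paper additionally allocates memory selectively over the input space; that
# part is not modeled here.
import numpy as np

class RBFRLSApproximator:
    def __init__(self, centers, width, lam=1.0, p0=1e3):
        self.centers = np.atleast_2d(centers)   # (n_neurons, n_inputs) RBF centers
        self.width = width                       # shared Gaussian width
        n = self.centers.shape[0]
        self.w = np.zeros(n)                     # output-layer weights
        self.P = np.eye(n) * p0                  # inverse-correlation matrix estimate
        self.lam = lam                           # forgetting factor (1.0 = remember everything)

    def features(self, x):
        # Gaussian RBF activations of the input x
        d2 = np.sum((self.centers - np.asarray(x)) ** 2, axis=1)
        return np.exp(-d2 / (2.0 * self.width ** 2))

    def predict(self, x):
        return float(self.features(x) @ self.w)

    def update(self, x, y):
        # One RLS step: move the weights toward the least-squares fit of all
        # (exponentially weighted) samples seen so far.
        phi = self.features(x)
        P_phi = self.P @ phi
        gain = P_phi / (self.lam + phi @ P_phi)
        err = y - phi @ self.w
        self.w = self.w + gain * err
        self.P = (self.P - np.outer(gain, P_phi)) / self.lam
        return err

# Example use: fit a scalar stand-in for an unknown dynamics term from streaming samples.
approx = RBFRLSApproximator(centers=np.linspace(-2, 2, 21)[:, None], width=0.3)
for x in np.random.uniform(-2, 2, 500):
    approx.update(np.array([x]), np.sin(2 * x))  # y stands in for a measured dynamics value
print(abs(approx.predict(np.array([0.5])) - np.sin(1.0)))
```

The least-squares objective is what underlies the improved learning speed claimed in the abstract relative to a purely Lyapunov-driven weight adaptation; the selective-memory part of SMRLS additionally decides which past data each region of the input space retains, which this sketch does not attempt to reproduce.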
Related papers
- A Unified Framework for Neural Computation and Learning Over Time [56.44910327178975]
Hamiltonian Learning is a novel unified framework for learning with neural networks "over time".
It is based on differential equations that: (i) can be integrated without the need for external software solvers; (ii) generalize the well-established notion of gradient-based learning in feed-forward and recurrent networks; (iii) open up novel perspectives.
arXiv Detail & Related papers (2024-09-18T14:57:13Z) - Dynamics of Supervised and Reinforcement Learning in the Non-Linear Perceptron [3.069335774032178]
We use a dataset-process approach to derive flow equations describing learning.
We characterize the effects of the learning rule (supervised or reinforcement learning, SL/RL) and input-data distribution on the perceptron's learning curve.
This approach points a way toward analyzing learning dynamics for more-complex circuit architectures.
arXiv Detail & Related papers (2024-09-05T17:58:28Z) - Real-Time Recurrent Reinforcement Learning [7.737685867200335]
RTRRL consists of three parts: (1) a Meta-RL RNN architecture, implementing on its own an actor-critic algorithm; (2) an outer reinforcement learning algorithm, exploiting temporal difference learning and dutch eligibility traces to train the Meta-RL network; and (3) random-feedback local-online (RFLO) learning, an online automatic differentiation algorithm for computing the gradients with respect to parameters of the network.
arXiv Detail & Related papers (2023-11-08T16:56:16Z) - Improving Performance in Continual Learning Tasks using Bio-Inspired
Architectures [4.2903672492917755]
We develop a biologically inspired lightweight neural network architecture that incorporates synaptic plasticity mechanisms and neuromodulation.
Our approach leads to superior online continual learning performance on Split-MNIST, Split-CIFAR-10, and Split-CIFAR-100 datasets.
We further demonstrate the effectiveness of our approach by integrating key design concepts into other backpropagation-based continual learning algorithms.
arXiv Detail & Related papers (2023-08-08T19:12:52Z) - Properties and Potential Applications of Random Functional-Linked Types
of Neural Networks [81.56822938033119]
Random functional-linked neural networks (RFLNNs) offer an alternative way of learning in deep structures.
This paper gives some insights into the properties of RFLNNs from the viewpoint of the frequency domain.
We propose a method to generate a BLS network with better performance, and design an efficient algorithm for solving Poisson's equation.
arXiv Detail & Related papers (2023-04-03T13:25:22Z) - ConCerNet: A Contrastive Learning Based Framework for Automated
Conservation Law Discovery and Trustworthy Dynamical System Prediction [82.81767856234956]
This paper proposes a new learning framework named ConCerNet to improve the trustworthiness of the DNN based dynamics modeling.
We show that our method consistently outperforms the baseline neural networks in both coordinate error and conservation metrics.
arXiv Detail & Related papers (2023-02-11T21:07:30Z) - The least-control principle for learning at equilibrium [65.2998274413952]
We present a new principle for learning equilibrium recurrent neural networks, deep equilibrium models, or meta-learning.
Our results shed light on how the brain might learn and offer new ways of approaching a broad class of machine learning problems.
arXiv Detail & Related papers (2022-07-04T11:27:08Z) - lpSpikeCon: Enabling Low-Precision Spiking Neural Network Processing for
Efficient Unsupervised Continual Learning on Autonomous Agents [14.916996986290902]
We propose lpSpikeCon, a novel methodology to enable low-precision SNN processing for efficient unsupervised continual learning.
Our lpSpikeCon can reduce weight memory of the SNN model by 8x (i.e., by judiciously employing 4-bit weights) for performing online training with unsupervised continual learning.
arXiv Detail & Related papers (2022-05-24T18:08:16Z) - Neuromodulated Neural Architectures with Local Error Signals for
Memory-Constrained Online Continual Learning [4.2903672492917755]
We develop a biologically inspired lightweight neural network architecture that incorporates local learning and neuromodulation.
We demonstrate the efficacy of our approach in both single-task and continual learning settings.
arXiv Detail & Related papers (2020-07-16T07:41:23Z) - Incremental Training of a Recurrent Neural Network Exploiting a
Multi-Scale Dynamic Memory [79.42778415729475]
We propose a novel incrementally trained recurrent architecture targeting explicitly multi-scale learning.
We show how to extend the architecture of a simple RNN by separating its hidden state into different modules.
We discuss a training algorithm where new modules are iteratively added to the model to learn progressively longer dependencies.
arXiv Detail & Related papers (2020-06-29T08:35:49Z) - Rectified Linear Postsynaptic Potential Function for Backpropagation in
Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)