Uncertainty-based Modulation for Lifelong Learning
- URL: http://arxiv.org/abs/2001.09822v1
- Date: Mon, 27 Jan 2020 14:34:37 GMT
- Title: Uncertainty-based Modulation for Lifelong Learning
- Authors: Andrew Brna, Ryan Brown, Patrick Connolly, Stephen Simons, Renee
Shimizu, Mario Aguilar-Simon
- Abstract summary: We present an algorithm inspired by neuromodulatory mechanisms in the human brain that integrates and expands upon Stephen Grossberg's Adaptive Resonance Theory proposals.
Specifically, it builds on the concept of uncertainty, and employs a series of neuromodulatory mechanisms to enable continuous learning.
We demonstrate the critical role of developing these systems in a closed-loop manner where the environment and the agent's behaviors constrain and guide the learning process.
- Score: 1.3334365645271111
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The creation of machine learning algorithms for intelligent agents capable of
continuous, lifelong learning is a critical objective for algorithms being
deployed on real-life systems in dynamic environments. Here we present an
algorithm inspired by neuromodulatory mechanisms in the human brain that
integrates and expands upon Stephen Grossberg's ground-breaking Adaptive
Resonance Theory proposals. Specifically, it builds on the concept of
uncertainty, and employs a series of neuromodulatory mechanisms to enable
continuous learning, including self-supervised and one-shot learning. Algorithm
components were evaluated in a series of benchmark experiments that demonstrate
stable learning without catastrophic forgetting. We also demonstrate the
critical role of developing these systems in a closed-loop manner where the
environment and the agent's behaviors constrain and guide the learning
process. To this end, we integrated the algorithm into an embodied simulated
drone agent. The experiments show that the algorithm is capable of continuous
learning of new tasks and under changed conditions with high classification
accuracy (greater than 94 percent) in a virtual environment, without
catastrophic forgetting. The algorithm accepts high dimensional inputs from any
state-of-the-art detection and feature extraction algorithms, making it a
flexible addition to existing systems. We also describe future development
efforts focused on imbuing the algorithm with mechanisms to seek out new
knowledge as well as employ a broader range of neuromodulatory processes.
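The abstract does not give implementation details, but the ART mechanism it builds on is well documented and can be sketched. Below is a minimal Fuzzy ART presentation step in Python; the function names, vigilance value, and learning rates are illustrative assumptions, not the paper's algorithm. The key idea matches the abstract's use of uncertainty: a high match with an existing category triggers resonance and learning, while a poor match (high uncertainty) recruits a new category instead of overwriting old knowledge, which is how ART-style systems avoid catastrophic forgetting.

```python
import numpy as np

def fuzzy_and(a, b):
    # Component-wise minimum (fuzzy AND), as used in Fuzzy ART.
    return np.minimum(a, b)

def art_step(x, categories, rho=0.75, beta=1.0, alpha=0.001):
    """One presentation of input x (values in [0, 1]) to a Fuzzy ART layer.

    categories: list of learned weight vectors. rho is the vigilance
    threshold: a low match (high uncertainty) triggers search and,
    if nothing matches, creation of a new category.
    Returns (index of the resonating category, updated categories).
    """
    if not categories:
        return 0, [x.copy()]
    # Choice function ranks candidate categories for the search.
    scores = [fuzzy_and(x, w).sum() / (alpha + w.sum()) for w in categories]
    for j in np.argsort(scores)[::-1]:
        match = fuzzy_and(x, categories[j]).sum() / x.sum()
        if match >= rho:  # resonance: match exceeds vigilance
            categories[j] = beta * fuzzy_and(x, categories[j]) \
                + (1 - beta) * categories[j]
            return j, categories
    categories.append(x.copy())  # mismatch everywhere: recruit new category
    return len(categories) - 1, categories
```

Two similar inputs land in the same category, while a dissimilar one creates a new category rather than corrupting the old weights; the vigilance parameter rho sets how much uncertainty is tolerated before new learning is recruited.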
Related papers
- Ornstein-Uhlenbeck Adaptation as a Mechanism for Learning in Brains and Machines [2.1168858935852013]
We introduce a novel approach that leverages noise in the parameters of the system and global reinforcement signals.
Operating in continuous time, Ornstein-Uhlenbeck adaptation (OUA) is proposed as a general mechanism for learning.
OUA provides a viable alternative to traditional gradient-based methods, with potential applications in neuromorphic computing.
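The Ornstein-Uhlenbeck process itself is standard: parameters follow mean-reverting noise around a slowly adapted mean, and a global reinforcement signal decides when that mean moves. A hedged sketch of this idea in Python (the training loop, learning rate, and greedy acceptance rule below are illustrative assumptions, not the paper's method):

```python
import numpy as np

def ou_step(theta, mu, kappa, sigma, dt, rng):
    # Euler-Maruyama discretization of the Ornstein-Uhlenbeck SDE:
    #   d(theta) = kappa * (mu - theta) dt + sigma dW
    noise = rng.standard_normal(theta.shape)
    return theta + kappa * (mu - theta) * dt + sigma * np.sqrt(dt) * noise

def train(reward_fn, dim=2, steps=2000, kappa=1.0, sigma=0.3,
          dt=0.05, lr=0.2, seed=0):
    # Parameters wander around a mean mu; whenever sampled parameters
    # beat the best reward so far, a global reinforcement signal pulls
    # the mean toward them. No gradients of reward_fn are needed.
    rng = np.random.default_rng(seed)
    mu = np.zeros(dim)
    theta = mu.copy()
    best = reward_fn(theta)
    for _ in range(steps):
        theta = ou_step(theta, mu, kappa, sigma, dt, rng)
        r = reward_fn(theta)
        if r > best:
            mu = mu + lr * (theta - mu)  # reward-modulated mean update
            best = r
    return mu
```

The mean reversion keeps exploration local while the noise term provides the perturbations that gradient-free reinforcement exploits, which is why such schemes are attractive for neuromorphic hardware.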
arXiv Detail & Related papers (2024-10-17T14:00:18Z) - A Unified Framework for Neural Computation and Learning Over Time [56.44910327178975]
Hamiltonian Learning is a novel unified framework for learning with neural networks "over time".
It is based on differential equations that: (i) can be integrated without the need of external software solvers; (ii) generalize the well-established notion of gradient-based learning in feed-forward and recurrent networks; (iii) open to novel perspectives.
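Point (i), integration without external solvers, can be illustrated with a plain forward-Euler update of a recurrent neural state ODE. The specific dynamics below (dx/dt = -x + tanh(Wx + u)) are a generic leaky recurrent model chosen for illustration, not the Hamiltonian Learning equations:

```python
import numpy as np

def euler_neural_ode(x0, W, u, dt=0.01, steps=500):
    # Forward-Euler integration of a recurrent neural state ODE:
    #   dx/dt = -x + tanh(W x + u)
    # The update rule is explicit, so no external ODE solver is needed.
    x = x0.copy()
    for _ in range(steps):
        x = x + dt * (-x + np.tanh(W @ x + u))
    return x
```

With recurrent weights W = 0 the state simply relaxes to tanh(u), which makes the solver-free integration easy to verify; richer W produces genuinely recurrent, time-unfolded computation of the kind the framework generalizes.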
arXiv Detail & Related papers (2024-09-18T14:57:13Z) - Towards Biologically Plausible Computing: A Comprehensive Comparison [24.299920289520013]
Backpropagation is a cornerstone algorithm in training neural networks for supervised learning.
The biological plausibility of backpropagation is questioned due to its requirements for weight symmetry, global error computation, and dual-phase training.
In this study, we establish criteria for biological plausibility that a desirable learning algorithm should meet.
arXiv Detail & Related papers (2024-06-23T09:51:20Z) - Stochastic Online Optimization for Cyber-Physical and Robotic Systems [9.392372266209103]
We propose a novel online framework for solving stochastic programming problems in the context of cyber-physical and robotic systems.
Our problem formulation's constraints model the evolution of a cyber-physical system, which has, in general, a continuous state and action space and nonlinear dynamics.
We show that even rough estimates of the dynamics can significantly improve the convergence of our algorithms.
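The claim that rough dynamics estimates suffice can be illustrated in miniature: gradient descent driven by an inexactly estimated gradient still converges as long as the estimate preserves the descent direction. A toy sketch (the quadratic objective and the 30% relative-error model are illustrative assumptions, not the paper's setting):

```python
import numpy as np

def noisy_gd(grad, x0, lr=0.1, steps=200, grad_err=0.3, seed=0):
    # Gradient descent where the true gradient (from exact dynamics) is
    # replaced by a rough estimate: the true value corrupted by up to
    # +/- grad_err relative error per coordinate.
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(steps):
        g = grad(x)
        g_hat = g * (1 + grad_err * rng.uniform(-1, 1, size=g.shape))
        x = x - lr * g_hat
    return x
```

Because the corrupted multiplier stays positive, every step still contracts the distance to the optimum; only the contraction rate degrades, which is the qualitative behavior behind "rough estimates of the dynamics can significantly improve convergence".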
arXiv Detail & Related papers (2024-04-08T09:08:59Z) - Brain-Inspired Machine Intelligence: A Survey of
Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z) - The Clock and the Pizza: Two Stories in Mechanistic Explanation of
Neural Networks [59.26515696183751]
We show that algorithm discovery in neural networks is sometimes more complex.
We show that even simple learning problems can admit a surprising diversity of solutions.
arXiv Detail & Related papers (2023-06-30T17:59:13Z) - Learning low-dimensional dynamics from whole-brain data improves task
capture [2.82277518679026]
We introduce a novel approach to learning low-dimensional approximations of neural dynamics by using a sequential variational autoencoder (SVAE)
Our method finds smooth dynamics that can predict cognitive processes with accuracy higher than classical methods.
We evaluate our approach on various task-fMRI datasets, including motor, working memory, and relational processing tasks.
arXiv Detail & Related papers (2023-05-18T18:43:13Z) - Incremental procedural and sensorimotor learning in cognitive humanoid
robots [52.77024349608834]
This work presents a cognitive agent that can learn procedures incrementally.
We show the cognitive functions required in each substage and how adding new functions helps address tasks previously unsolved by the agent.
Results show that this approach is capable of solving complex tasks incrementally.
arXiv Detail & Related papers (2023-04-30T22:51:31Z) - Backprop-Free Reinforcement Learning with Active Neural Generative
Coding [84.11376568625353]
We propose a computational framework for learning action-driven generative models without backpropagation of errors (backprop) in dynamic environments.
We develop an intelligent agent that operates even with sparse rewards, drawing inspiration from the cognitive theory of planning as inference.
The robust performance of our agent offers promising evidence that a backprop-free approach for neural inference and learning can drive goal-directed behavior.
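The backprop-free ingredient here is neural generative coding, a predictive-coding-style scheme in which every update is local: error neurons compare a layer's prediction with the signal below it, and both states and weights adjust from that local error alone. A minimal single-layer sketch (the learning rates and settling schedule are illustrative assumptions, not the paper's full agent):

```python
import numpy as np

def ngc_settle(x, W, z=None, n_iter=50, lr_z=0.1):
    # Inference by iterative settling: latent state z is adjusted to
    # reduce the local prediction error e = x - W z. No global error
    # is backpropagated anywhere.
    if z is None:
        z = np.zeros(W.shape[1])
    for _ in range(n_iter):
        e = x - W @ z              # error neurons
        z = z + lr_z * (W.T @ e)   # local, error-driven state update
    return z

def ngc_learn(x, W, lr_w=0.05, n_iter=50):
    # Weight update is Hebbian: outer product of the local error and
    # the settled latent activity.
    z = ngc_settle(x, W, n_iter=n_iter)
    e = x - W @ z
    return W + lr_w * np.outer(e, z)
```

Repeatedly settling and applying the local Hebbian update drives the reconstruction error down without ever computing a backpropagated gradient, which is the sense in which such agents are "backprop-free".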
arXiv Detail & Related papers (2021-07-10T19:02:27Z) - AutoML-Zero: Evolving Machine Learning Algorithms From Scratch [76.83052807776276]
We show that it is possible to automatically discover complete machine learning algorithms just using basic mathematical operations as building blocks.
We demonstrate this by introducing a novel framework that significantly reduces human bias through a generic search space.
We believe these preliminary successes in discovering machine learning algorithms from scratch indicate a promising new direction in the field.
arXiv Detail & Related papers (2020-03-06T19:00:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.