A Biologically Plausible Audio-Visual Integration Model for Continual
Learning
- URL: http://arxiv.org/abs/2007.08855v2
- Date: Tue, 20 Jul 2021 09:21:16 GMT
- Title: A Biologically Plausible Audio-Visual Integration Model for Continual
Learning
- Authors: Wenjie Chen, Fengtong Du, Ye Wang, Lihong Cao
- Abstract summary: We propose a novel biologically plausible audio-visual integration model (AVIM).
We use multi-compartment Hodgkin-Huxley neurons to build the model and adopt the calcium-based synaptic tagging and capture as the model's learning rule.
Our experimental results show that the proposed AVIM can achieve state-of-the-art continual learning performance.
- Score: 7.680515385940673
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The problem of catastrophic forgetting has a history of more than 30 years
and has not been completely solved yet. Since the human brain has a natural
ability to perform continual lifelong learning, learning from the brain may
provide solutions to this problem. In this paper, we propose a novel
biologically plausible audio-visual integration model (AVIM) based on the
assumption that the integration of audio and visual perceptual information in
the medial temporal lobe during learning is crucial to form concepts and make
continual learning possible. Specifically, we use multi-compartment
Hodgkin-Huxley neurons to build the model and adopt the calcium-based synaptic
tagging and capture as the model's learning rule. Furthermore, we define a new
continual learning paradigm to simulate the possible continual learning process
in the human brain. We then test our model under this new paradigm. Our
experimental results show that the proposed AVIM can achieve state-of-the-art
continual learning performance compared with other advanced methods such as
OWM, iCaRL and GEM. Moreover, it can generate stable representations of objects
during learning. These results support our assumption that concept formation is
essential for continuous lifelong learning and suggest the proposed AVIM is a
possible concept formation mechanism.
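The abstract's main modeling ingredients are multi-compartment Hodgkin-Huxley neurons and a calcium-based synaptic tagging-and-capture learning rule. The paper's actual multi-compartment configuration and rule parameters are not given here, so the sketch below shows only the standard single-compartment Hodgkin-Huxley dynamics with the classic 1952 squid-axon parameters; it is an illustrative sketch, not the authors' AVIM implementation.

```python
import math

# Classic Hodgkin-Huxley (1952) single-compartment parameters.
# NOTE: illustrative only -- AVIM uses multi-compartment HH neurons
# with a calcium-based tagging-and-capture rule not reproduced here.
C_m = 1.0                                # membrane capacitance, uF/cm^2
g_Na, g_K, g_L = 120.0, 36.0, 0.3        # peak conductances, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.387    # reversal potentials, mV

# Voltage-dependent gating-variable rate functions (units: 1/ms, V in mV).
def alpha_n(V): return 0.01 * (V + 55) / (1 - math.exp(-(V + 55) / 10))
def beta_n(V):  return 0.125 * math.exp(-(V + 65) / 80)
def alpha_m(V): return 0.1 * (V + 40) / (1 - math.exp(-(V + 40) / 10))
def beta_m(V):  return 4.0 * math.exp(-(V + 65) / 18)
def alpha_h(V): return 0.07 * math.exp(-(V + 65) / 20)
def beta_h(V):  return 1.0 / (1 + math.exp(-(V + 35) / 10))

def simulate(I_ext=10.0, T=50.0, dt=0.01):
    """Forward-Euler integration under a constant current I_ext (uA/cm^2).

    Returns the membrane-potential trace (mV) sampled every dt ms.
    """
    V, n, m, h = -65.0, 0.317, 0.053, 0.596  # resting-state values
    trace = []
    for _ in range(int(T / dt)):
        # Ionic currents at the current state.
        I_Na = g_Na * m**3 * h * (V - E_Na)
        I_K = g_K * n**4 * (V - E_K)
        I_L = g_L * (V - E_L)
        # Euler update of membrane potential and gating variables.
        V += dt * (I_ext - I_Na - I_K - I_L) / C_m
        n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
        m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
        trace.append(V)
    return trace

trace = simulate(I_ext=10.0)  # suprathreshold drive produces repetitive spiking
```

A constant 10 uA/cm^2 drive is above the model's firing threshold, so the trace spikes well past 0 mV and then undershoots below rest during the afterhyperpolarization.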
Related papers
- Structural features of the fly olfactory circuit mitigate the stability-plasticity dilemma in continual learning [46.74846593421828]
We introduce the fly olfactory circuit as a plug-and-play component, termed the Fly Model, which can integrate with modern machine learning methods.
Our findings demonstrate that the Fly Model enhances both memory stability and learning plasticity, overcoming the limitations of current continual learning strategies.
arXiv Detail & Related papers (2025-02-03T15:06:11Z)
- Advancing Brain Imaging Analysis Step-by-step via Progressive Self-paced Learning [0.5840945370755134]
We introduce the Progressive Self-Paced Distillation (PSPD) framework, employing an adaptive and progressive pacing and distillation mechanism.
We validate PSPD's efficacy and adaptability across various convolutional neural networks using the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset.
arXiv Detail & Related papers (2024-07-23T02:26:04Z) - MindBridge: A Cross-Subject Brain Decoding Framework [60.58552697067837]
Brain decoding aims to reconstruct stimuli from acquired brain signals.
Currently, brain decoding is confined to a per-subject-per-model paradigm.
We present MindBridge, which achieves cross-subject brain decoding with only one model.
arXiv Detail & Related papers (2024-04-11T15:46:42Z) - Meta-Learning in Spiking Neural Networks with Reward-Modulated STDP [2.179313476241343]
We propose a bio-plausible meta-learning model inspired by the hippocampus and the prefrontal cortex.
Our new model can easily be applied to spike-based neuromorphic devices and enables fast learning in neuromorphic hardware.
arXiv Detail & Related papers (2023-06-07T13:08:46Z) - Predictive Experience Replay for Continual Visual Control and
Forecasting [62.06183102362871]
We present a new continual learning approach for visual dynamics modeling and explore its efficacy in visual control and forecasting.
We first propose the mixture world model that learns task-specific dynamics priors with a mixture of Gaussians, and then introduce a new training strategy to overcome catastrophic forgetting.
Our model remarkably outperforms the naive combinations of existing continual learning and visual RL algorithms on DeepMind Control and Meta-World benchmarks with continual visual control tasks.
arXiv Detail & Related papers (2023-03-12T05:08:03Z) - Anti-Retroactive Interference for Lifelong Learning [65.50683752919089]
We design a paradigm for lifelong learning based on meta-learning and associative mechanism of the brain.
It tackles the problem from two aspects: extracting knowledge and memorizing knowledge.
We show theoretically that the proposed learning paradigm can make the models of different tasks converge to the same optimum.
arXiv Detail & Related papers (2022-08-27T09:27:36Z) - Multimodal foundation models are better simulators of the human brain [65.10501322822881]
We present a newly-designed multimodal foundation model pre-trained on 15 million image-text pairs.
We find that both visual and lingual encoders trained multimodally are more brain-like compared with unimodal ones.
arXiv Detail & Related papers (2022-08-17T12:36:26Z) - Continual Learning with Bayesian Model based on a Fixed Pre-trained
Feature Extractor [55.9023096444383]
Current deep learning models are characterised by catastrophic forgetting of old knowledge when learning new classes.
Inspired by the process of learning new knowledge in human brains, we propose a Bayesian generative model for continual learning.
arXiv Detail & Related papers (2022-04-28T08:41:51Z) - Learning Temporal Dynamics from Cycles in Narrated Video [85.89096034281694]
We propose a self-supervised solution to the problem of learning to model how the world changes as time elapses.
Our model learns modality-agnostic functions to predict forward and backward in time, which must undo each other when composed.
We apply the learned dynamics model without further training to various tasks, such as predicting future action and temporally ordering sets of images.
arXiv Detail & Related papers (2021-01-07T02:41:32Z)
- Brain-inspired global-local learning incorporated with neuromorphic computing [35.70151531581922]
We report a neuromorphic hybrid learning model by introducing a brain-inspired meta-learning paradigm and a differentiable spiking model incorporating neuronal dynamics and synaptic plasticity.
We demonstrate the advantages of this model in multiple different tasks, including few-shot learning, continual learning, and fault-tolerance learning in neuromorphic vision sensors.
arXiv Detail & Related papers (2020-06-05T04:24:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.